sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +=======    Quality Techniques Newsletter    =======+
         +=======              July  2002             =======+

QTN is distributed worldwide to subscribers to support the Software
Research, Inc. (SR), TestWorks, QualityLabs, and eValid user
communities and other interested parties, and to provide information
of general use to the worldwide internet and software quality and
testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of QTN provided that the
entire document/file is kept intact and this complete copyright
notice appears with it in all copies.  Information on how to
subscribe or unsubscribe is at the end of this issue.  (c) Copyright
2002 by Software Research, Inc.


                       Contents of This Issue

   o  QW2002 -- Only a Few Days Left to Save $$

   o  Making Software Testing More Effective (Part 1 of 2), by Boris
      Beizer

   o  eValid Benefits Summary

   o  Response to Tough Times, Hard Questions, by Kieron Jarvis

   o  First International Workshop on Verification and Validation of
      Enterprise Information Systems (VVEIS-2003).

   o  Australasian Information Security Workshop (AISW-2003)

   o  Remember When (Part 2)?

   o  Workshop on Improvements to the Software Engineering Body of
      Knowledge and the Software Engineering Education Body of
      Knowledge
   o  2nd International Conference on COTS-Based Software Systems

   o  QTN Article Submittal, Subscription Information


              QW2002: Only a Few Days Left to Save $$

                      Fifteenth International
          Software and Internet Quality Week Conference,
            3-6 September 2002, San Francisco, CA  USA

QW2002 is the meeting place to get your questions answered, to learn
from your peers, and to take practical solutions back to your job.

Are you looking for expert opinions?  The QW2002 speakers are top
industry and academic leaders, gathered to share their knowledge.

Do you need an update on available Tools & Services? You will have
access to demos from the Top Tools & Services Vendors.  Don't miss
the opportunity to save your company a lot of money in the long run!

You're invited to participate in the premier Software and Internet
Quality Conference -- Quality Week 2002, San Francisco, 3-6
September 2002.


Register by August 4, and save your company $100 or more.






*** QW2002's THEME: The Wired World...

Change is very rapid in the new wired world, and the wave of change
brought about by the Internet affects how we approach our work and
how we think about quality of software and its main applications in
IT and E-commerce.  QW2002 aims to tackle internet and related
issues head on, with special presentations dealing with important
changes in the software quality and internet areas.

*** QW2002 OFFERS...

The QW2002 program consists of four days of training Seminars,
mini-Tutorials or QuickStarts, Panels, Technical Papers and
Workshops that focus on software and internet test technologies, as
experienced around the world.

  * 14 full and half-day in-depth training seminars.
  * Over 80 Presentations in a Three-Day, Six-Track Conference.
  * Practical Keynotes with Real-World Experience from Leading
    Industry Practitioners.
  * The latest Research Results from Academia.
  * Exchange of critical information among technologists, managers,
    and consultants.
  * Lessons Learned & Success Stories.
  * State-of-the-art information on software quality and Web testing.
  * Vendor Technical Presentations and Demonstrations.
  * Latest Tools and Trends.
  * Three-Day Vendor Show/Exhibition.

*** REGISTRATION SAVINGS

Don't miss out.  Register before August 4 and save $100 or more.


| Quality Week 2002         | Phone:        [+1] (415) 861-2800 |
| SR Institute              | Toll Free (USA):   1-800-942-SOFT |
| 1663 Mission St. Suite 400| FAX:          [+1] (415) 861-9801 |
| San Francisco, CA  94103  | Email:       |
| USA                       | Web: |


        Making Software Testing More Effective (Part 1 of 2)
                            Boris Beizer

      Note:  This article is taken from a collection of Dr.
      Boris Beizer's essays "Software Quality Reflections" and
      is reprinted with permission of the author.  We plan to
      include additional items from this collection in future
      months.  You can contact Dr. Beizer at <>.

1. How to Make Testing More Effective.

There's an old cold-war proposal for solving the Soviet submarine
menace: boil the oceans.  The Soviet subs come up to cool off and
they can then be easily sunk.  Here are some ocean-boiling proposals
for testing:

  * Know which test techniques work best.

  * Automate retesting and test management.

  * Minimize test design labor.

  * Know when to stop testing.

The answer to how to make software testing more effective is the
answer to the question of why we can't boil the testing oceans.

2. The Barriers to Effective Testing

2.1. Why we don't know what does and doesn't work

2.1.1. Overview

Published statistics on test technique effectiveness come mostly
from academic and government sources.  The academic data are suspect
because they usually deal with novice programmers working on toy
problems.  The government data are suspect because of unusual
parameters such as security, languages, and application.  It isn't
that academic or government sponsored research in testing technique
effectiveness is bad, but rather that there are many caveats to any
attempt to extrapolate testing effectiveness data from those
environments to commercial software development.

One of the more frustrating experiences has been data I've gotten
from clients on test technique effectiveness.  I'm shown data, but
there are strings attached. Test effectiveness and bug statistics
are jealously guarded by every industrial developer I've talked to.
It's considered very private -- second only to marketing strategies.
Why do good software suppliers gather valuable statistics but won't
share that data with the industry?

1. Fear of litigation and consequential damages.

2. Fear of marketing impact on naive buyers.

3. Fear of helping competitor be better.

4. Fear of giving a weapon to competitors.

2.1.2. Litigation Fears

It's been said that we're the most litigious nation on earth. When
it comes to test effectiveness statistics, the reasoning is:

1. Test methods are rated by their ability to catch bugs.

2. If a technique finds bugs, it's an admission that there are bugs.

3. Because no technique is perfect, published statistics are an
admission that we release buggy products.

4. If we admit that, we've lost the lawsuit before we start.

The fear is not unfounded, but our lawyers are looking at software
the wrong way.  What would happen to the medical profession if it
didn't publish death-rate statistics for fear of malpractice suits?
The death of a patient isn't medical malpractice -- the presence of
bugs isn't software malpractice.  Test effectiveness statistics can
be published without revealing total bug rates.  Most of the
important data can be "sanitized."  If absolute rates are a problem
for your lawyers and PR types, then consider publishing relative
test effectiveness data.
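The "relative" idea can be sketched in a few lines: report each
technique's share of the bugs caught, never the absolute counts.
All technique names and figures below are invented for illustration.

```python
# Hypothetical illustration of publishing *relative* test-technique
# effectiveness without disclosing absolute bug counts.

bugs_found = {           # bugs caught per technique (private, absolute)
    "code review":    120,
    "unit testing":    90,
    "domain testing":  60,
    "system testing":  30,
}

total = sum(bugs_found.values())

# Relative effectiveness: each technique's share of all caught bugs.
# This can be published without revealing total bug rates.
relative = {t: round(n / total, 2) for t, n in bugs_found.items()}

print(relative)
```

A report built only from `relative` sanitizes the data in exactly
the sense the essay suggests: competitors and lawyers learn which
techniques pay off, but not how buggy the product was.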

2.1.3. Our Naive Buyers

The technological gap between supplier and user has been growing.
The technological knowhow of a software buyer relative to a specific
software product is increasingly distant from the developer's
knowledge. There was a time when application programmers knew almost
as much about operating systems as did operating system developers;
not because application programmers were smarter then, but because
there was much less to know. Today, most software is a mystery to
most buyers and will become even more mysterious in the future.

Today's buyer is not a programmer. That's obvious for personal
computers but it applies across the board: statistics software used
by biochemists, inventory control software used by factory managers,
etc. In our drive to de-professionalize software usage, we've
created an unhappy monster -- the naive user.  Furthermore, we (the
software developers) and the media (with our help) have nurtured
this monster with our self-serving myth of "perfect software" -- if
it isn't "perfect", it can't work!

We were proud: proud to master a trade which required a degree of
perfection unknown in any previous occupation: proud of an error
rate that makes surgeons look like chain-saw murderers.  Our myth
has not served us well because we've boxed ourselves in with it.
How does this relate to the fear of publishing test effectiveness or
any bug-related statistics?  If buyers, lawyers, judges, and
programmers all believe the "perfect software"  myth then any
software supplier who publishes test and bug statistics has branded
himself as a slob who produces bug-ridden software.  We created that
myth out of ignorance and arrogance.  Now that it's working against
us, it's up to us to correct the naive user's unrealistic
expectations.
2.1.4. Helping Our Competitors

If we publish the statistics, then our competitors will get better
-- that's an easy one to dispose of.  I would hope that competitors
get better as a result. This is a groundless fear because there are
far greater information leaks -- the biggest is the high turnover of
programmers.  What advantage could a competitor gain from such
"inside knowledge" of test effectiveness?  The test technique used
is dictated by the functional and structural nature of the product,
secondarily by the phase of testing (e.g. unit, integration,
system), and thirdly by the development environment (e.g. available
tools, organization, culture).  I doubt that any statistics would
cause a rational competitor to radically shift his testing tactics
or strategy. Perhaps, a change might occur over a period of years as
industry-wide statistics emerged.  But any supplier who says "X is
doing great things with domain testing, let's use the same
proportion they're using" will not be a competitor for long, because
in chasing after that rainbow they're probably dissipating scarce
resources on the wrong problems.

We must adopt an industry rather than a parochial view.  Nobody in
this business can afford controlled experiments on the software
development process. The only way that we will grow is to learn from
the successes and failures of others -- especially of competitors.
The advantage to the programming community of open publication of
vital statistics, of success stories, and more important, of
miserable failures, is far greater than the putative loss of
competitive advantages.

2.1.5. A Weapon for Competitors?

The last fear is the fear of giving a competitor a weapon.  A
publishes test effectiveness and bug statistics.  B (an unscrupulous
low-life), uses A's own data to illustrate how buggy A's software
is.  It's an interesting fear, and it's often been expressed to me,
but I've never seen it happen.  I've never seen a software supplier
who would open that can of worms, because none is so confident of
its own record that it would invite questions about its own bug
statistics.  Remember that the ban against gas warfare hasn't been
upheld for humanitarian reasons but because gas warfare was a weapon
too prone to backfire to be worth the trouble.

2.1.6. Do We Understand the Statistics When We Get Them?

When results are published, it's often impossible to compare
different data because:

1. Inconsistent terminology.

2. Inconsistent definition of technique used.

3. Incompatible methods of measuring program size.

4. Inconsistent definitions of "bug", "fault", etc.

5. Slipshod and/or creative statistics.

The terminology, definitions, and metrics issues are part of a
broader question of standards, which I'll discuss later.  Creative
or slipshod statistics will be with us for a while yet because the
notion of applying statistics to the software development process is
still new for most of us. Time will take care of that.  The starting
point for every software development group, and therefore for the
industry, is the implementation and application of suitable metrics
for internal use.  If you don't have such a process in place, you
should initiate it.  You cannot hope to evolve or apply an effective
software development process if you don't provide a continual,
quantitative, measurement of that process's effectiveness.  You
can't expect the emergence of useful tools and services if you don't
guide the potential supplier by giving them your (quantitative)
requirements.
2.2. Why Things Aren't Automated (Yet).

2.2.1. Overview

The testing process and the management of the testing activity still
have a long way to go in terms of automation because:

1. Testing is still a third-world activity.

2. No product-line testing environments.

3. A chicken-versus-egg case with most opting for "chicken".

2.2.2. The Software Third World

I have, in the past, urged revolution -- software testers and
quality assurance should take over the software development process.
I meant this, and still mean it, in the sense that quality assurance
has taken over the manufacturing process.  Software development as
done in most places is dominated by two superpowers: design and
marketing.  What tester hasn't felt like the sorry citizen of a
third-world banana republic?  Don't be satisfied because they've
upped your budget from ludicrous to unrealistic.  Don't be
comfortable because you can now hire the best instead of making do
with development's cast-off programmers.  Don't relax because
independence now means that you have equal say with development --
keep pushing for veto power.  We want to change the software
development process to a quality-driven process rather than a
features-driven process.  If you answer "yes" to any of the
following, you're still in the third world.

1. You block a release but the design group overrides you.

2. An executive explains "the realities" of software delivery  to
you and urges you to reconsider your objections.

3. You don't have a capital budget for tool acquisition.

4. There's no money or schedule slack for training people in tool
use.

5. Your salaries are less than those in design groups.

6. You have more second and third shift hours per worker than does
development, or lower priority for batch runs.

2.2.3. The Product-Line Testing Environment

Our hardware, software, and programming languages accurately reflect
our software development process. Like the software development
process, they too are feature-driven rather than quality-driven. Can
you point to any instructions in the machine's repertoire designed
to abet software testing?  Are there any operating system features
and utilities designed with the express purpose of software testing
in mind?  Does your language processor produce coverage monitoring,
data flows, and call trees, to name a few?  We're making do with a
development environment whose philosophy predates serious concern
about testing: an environment driven by the software developer's
needs and not by the tester's needs.  The hardware people are ahead
of us on this. More and more of a chip's real estate is devoted to
circuits which make it possible to test the chip. But so far, we've
got nothing on that chip to make software testing easier.  If
software test and QA are so important, why isn't half the code in
the operating system or in the language processor devoted to it?
I'd settle for 5%.

2.2.4. Types of Tools

We're getting tools -- but.  Rather than talk about specific tools,
let's categorize tools as follows: strap-on, stand-alone, and
services.

Strap-on Tools

Strap-on tools are an attempt to correct the deficiencies of a
software development environment by putting in a tool that works
within that environment.  Most test drivers and coverage tools fall
into this category.  For now, any such tool which you can build,
buy, borrow, steal, or adapt to your environment, is probably worth
the price. In the long run,  they're a bad solution because such
functions cannot be really good until they are developed from the
ground up and fully integrated with the software development
environment.  Strap-on tools are inconvenient.  They require
multiple passes over the same data, multiple copies of the same
program, variant data structures, and more complicated configuration
control.

How long will it be before strap-on tools are replaced by fully
integrated tools?  Considering the development cycle for most
operating systems and language processors, I estimate that it will
be at least 10 years before a well-equipped (from the point of view
of testing tools) operating system emerges.  So the tool buyer can
anticipate another decade of making do with strap-ons.  Which is to
say that you can expect to get your money's worth out of a good
strap-on tool even though a much better fully integrated tool is
theoretically possible.  From the tool vendor's point of view,
there's a good temporary market for such tools, but you had better
be thinking of how to license your hot proprietary tool to the
operating system vendor as an integrated product.

Stand-Alone Tools

We have stand-alone tools.  We can subdivide these into two major
types: general purpose test tools and special purpose.  Both of
these can be systems in their own right.  The general purpose test
tool, like the strap-on tool, is a short term winner and long term
loser.  It will eventually be replaced by fully integrated features
of (good) operating systems.  The special purpose, stand-alone tool
is a better long-term winner.  These tools are designed to satisfy a
specific, but limited test objective.

We have many examples in the communications field: load generators
for performance testing, protocol testers, environment simulators.
Every well-defined application is a potential market for such tools:
electronic fund transfers, automated bank tellers, payroll, general
accounting, inventory control, tax packages, telecommunications, to
name a few.  The payoff to the tool buyer can be great.  The cost of
designing a good test suite or generator can far exceed the cost of
design of the tested software.  Don't expect such tools to be cheap.
The developer may have to amortize his development cost over a very
small market.  And even after that's done, who can blame the
developer if he chooses to value-price the product at 25%-50% of
your expected development cost for that test suite?

Services

Most of us don't think of testing services as a "tool".  It is a
tool, and a good one.  First, it's the cheapest and cleanest way to get
true testing independence.  Second, but equally important, it's a
good way to get a kind of testing expertise that you might need only
once.  This is especially so for application-specific testing.
Some testing areas are very specialized and it takes a long time to
develop the expertise: security, communications, and performance are
three examples of such specialized testing areas.  Whether a testing
need is satisfied by the evolution of a special purpose test tool or
by a service (that may employ special tools) will depend on the
complexity of the tool and the embedded expertise.  As an example,
performance testing, analysis, and modeling is best handled by a
service, but capacity planning (which incorporates many of the same
elements) can be embedded in a (complicated) toolkit if the
environment is stable and the application not overly exotic.  Just
as it has happened for many software areas, we can expect the same
to happen to software testing areas: those you design yourself,
those you adapt, those you buy as a product, and those you buy as a
service, as well as various combinations.

Tool Training

A tool without training is just an ornament. You can expect the cost
of training testers and programmers in the use of any tool to exceed
the tool's purchase or development cost by a factor of 10 or more.
Such (usually internal) training costs should be budgeted as
explicit line items. If you don't make these costs explicit then the
introduction of any tool will appear to reduce programming and test
development productivity. Buyers should think in terms of a $5000
tool and $50,000 in training.  Tool vendors should think in terms of
the greater profit potential in tool usage training.  This high
training cost is another factor which can make the use of a service
more attractive than the purchase of a tool -- presumably, the
service's supplier doesn't need the training.
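The budgeting arithmetic above can be made concrete with a trivial
worked example; every figure, including the head count, is
hypothetical:

```python
# Hypothetical tool-acquisition budget: training dominates purchase
# price, so it must appear as an explicit line item.

tool_cost      = 5_000    # purchase price of the tool
training_ratio = 10       # training cost as a multiple of tool cost
testers        = 8        # head count to be trained (assumed)

training_cost = tool_cost * training_ratio   # the hidden line item
total_cost    = tool_cost + training_cost    # true cost of adoption
per_tester    = training_cost / testers      # training spend per head

print(f"tool {tool_cost}, training {training_cost}, total {total_cost}")
```

The point of making `training_cost` explicit is exactly the one in
the text: if it stays buried, the tool's introduction appears to
reduce productivity rather than to carry a known, budgeted cost.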

2.2.5. Chicken Versus Eggs.

Every test tool buyer, every test tool builder or vendor, and every
supplier or buyer of a testing service is an entrepreneur today.
Although I use the terms "buyer", "supplier", "vendor", "market",
and "venture capitalist" as if these are all in distinct
organizations, you should understand that you can have all of these,
or equivalent, elements within a given organization.

The supplier wants to provide a product or service in response to a
perceived market.  The normal way to do this in other industries is
to do a market survey and combine the results thereof with prior
experience with similar products. The new product is usually a small
departure from the previous product so that there can be a lot of
confidence in the result. By contrast, in our situation, we have no
previous history and the supplier doesn't really know what will and
won't sell and what will and won't work. The potential supplier has
only eloquence rather than facts with which to convince the venture
capitalist who puts up the bucks.  It's a high-risk situation.

Having built my share of tools which were technological wonders but
flops in the marketplace, I know whereof I speak.  You're not going
to get the tools and services you want and need unless someone takes
the risk. They're not going to have the bucks to take the risk
unless they con some venture capitalist into believing that there's
a market out there.  Who's to know what the market is unless you,
the potential buyer, speaks up? Speaking up means defining what you
want, what you're willing to pay for it, and showing the potential
vendor the color of your money -- i.e. that you've got a budget for
tool and services.

The potential vendor is also hampered by lack of industry test and
bug statistics -- the very statistics that you don't publish.  Such
statistics, even if only relative, help to define the technical
areas in which the payoff for a tool is likely to be greatest.  Most
tool builders have hundreds of technically good ideas but only their
personal experience, hearsay, and intuition to guide them in
selecting the one idea which is likeliest to be a hit with the
potential user. It's a chicken versus egg situation.  There's no
market until the tool exists and no tool development money until the
market is demonstrated.  No wonder most of us opt for "chicken".

It's said that a neurotic is one who builds castles in the air and
that a psychotic is one who lives in them.  What does that make the
real estate agent, the developer, the architect, and the person who
just wants to look at the place?  We're all going to have to be a
little bit psychotic (as is any risk taker) to break the tools and
services logjam. The buyer may have to go to her management and ask
for a long-term tools and services budget without being able to
specify exactly what it is that will be bought for it.  The supplier
is going to have to continue substituting eloquence for facts and
the money man is going to have to adopt the position of a
high-risk/high-reward venture capitalist.

                          To Be Continued


                    eValid -- Benefits Summary

eValid, the Test Enabled Web Browser, uses patent-pending InBrowser
technology to integrate all WebSite testing functions into a full-
function Web browser.  eValid runs on Windows NT/2000/XP and puts
every test and analysis feature at your fingertips -- right next to
the pages you are testing.

                    WebSite Quality Enhancement

Make sure your WebSite keeps your users happy by eliminating common
performance and structural problems.

  * Broken Links Eliminated.  Finds broken or unavailable links,
    even on dynamically generated WebSites.
  * Slow-loading Pages Eliminated.  Automatic analysis of aggregate
    loading times gives early warning of problem pages.
  * Better Balanced WebSite -- Adjusted Connectedness.  Identify
    under- and over-connected pages on your site.
  * Eliminates Page Performance Bottlenecks.  Detailed page timing
    is able to pinpoint problems fast.
  * Overly Complex Pages Eliminated.  Avoids page complexity bloat
    with automatic analysis of too-complex pages.
  * Out of Date, Over-Sized Pages Eliminated.  Searches your site
    for out-of-date and oversized pages.
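eValid's implementation is proprietary and not described here; purely
to illustrate the general idea behind link checking, here is a
generic sketch using Python's standard library (the URL in the usage
comment is a placeholder):

```python
# Generic link-checking sketch -- NOT eValid's method.  Collect the
# anchors on a page, then probe each target for an error response.

import urllib.request
import urllib.error
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Accumulates every href found in <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_link(url, timeout=10):
    """Return True if the link answers with a non-error status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False

# Usage (placeholder URL):
#   page = urllib.request.urlopen("http://example.com").read().decode()
#   collector = LinkCollector(); collector.feed(page)
#   broken = [u for u in collector.links if not check_link(u)]
```

A real checker would also resolve relative URLs against the page's
base and follow dynamically generated pages, which is where a
browser-based tool has the advantage the text describes.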


Get the most out of your QA/Test effort, and tie your developers
into the process early for super productivity.

  * Reduced Time to Market.  Scripts and processes are all available
    from the eValid browser --  making it easier for your WebSite
    developers to see what they have.
  * Easy to Learn, User Friendly Interface.  Improved QA personnel
    productivity -- with minimal training.
  * Fast, Easy, Test Recording.  WYSIWYG scripting recording
  * Ultra-compact scripts, optional full script annotation.  More
    scripts in less time, with fewer mistakes, and fewer re-creates.
  * Handles Everything.  A truly universal test engine:  HTTP, HTTPS,
    Applets, JavaScript, XML, dynamic pages, FLASH objects.
  * Robust Playback.  Scripts are object oriented and adapt
    dynamically to WebSite changes.
  * Rich Scripting Language.  Over 125 powerful commands and
    options.  Error and Alarm processing and built-in test recovery
  * Powerful Variable Test Data Creation.  Multiple data generation
    modes: synthesized, sequential, random.
  * Simple LoadTest Scenario Creation.  Functional tests check into
    LoadTest Scenario definitions.
  * 100% Realistic Loading.  No guesswork, no approximations, no
    "virtual users."  Multiple browsers are 100% real, with variable
    fidelity options, and variable footprint playback engines.
  * Documentation & Examples Online.  The latest, updated
    documentation and live examples are at your fingertips with 100%
    online documentation.
  * Integrated Test Suite Development.  eV.Manager banks suites of
    tests for use and reuse.
  * Single Product, Unified Operation.  Unified files, formats,
    operation, documentation for all WebSite functions -- saves on
    training costs!
  * Flexible Interfaces.  All scripts and reports are in database-
    ready format -- minimizes interface problems to other WebSite
    development efforts.
  * Comprehensive Logfiles, Charts.  Playback results logs are all
    HTML and spreadsheet ready.  Built-in charts for all logs.
  * Batch Processing.  Command line (batch processing) for every
    possible playback option.
  * Superior Pricing.  Best pricing and most flexible licensing.

                  Better User/Customer Experience

Assure your customers have the best WebSite  experience possible.

  * Assured Application Availability.  With deep transaction
    testing, whatever your users do can be scripted and used as a
    confirmation base.
  * Simplified Internal Monitoring.  Any script can become a
    quality/performance monitor.
  * Assured Server Capacity.  Uses only realistic (not simulated)
    user loading.  The capacity you test for is the capacity you get.
  * Improved Throughput Analysis.  Built-in performance charts
    identify problem areas quickly and accurately.
  * Integrity Assured.  Analyzes your entire WebSite automatically to
    assure you 100% integrity, to help maximize your WebSite ROI and
    eliminate customer complaints.


              Response to Tough Times, Hard Questions
                           Kieron Jarvis
                     Senior Test Engineer

      This is the very lightly edited response by Jarvis to
      the article about "Hard Questions" that appeared some
      months ago.

As I have been learning on-the-job without the benefit of training,
experienced colleagues, or good development processes, I am hardly
in a position to speak authoritatively on your questions.  But these
same shortcomings may also be the key to an alternative
view worth considering.  (They also may not, but I'll let you be the
judge of that.)

Here are my random ramblings for your consideration:

For the most part quality is subjective and the importance of it is
even more so.  I am one of those people who tends to judge an
organization by their output and consider quality to be important.
But, for every person like me, there is another in the world who
basically couldn't care less.  The subjective importance you place
upon quality will generate expectations of quality. These can be
improved or eroded by various factors, not least of which is the
general level of quality for the environment you're producing for.
Even those on the opposite end of the scale to myself will have
basic expectations of quality which can still be modified by
external factors.

Advertising, public opinion, current events, range of alternatives,
"quality" of alternatives, even cost, all of these factors affect
quality. Why did Amstrad put cooling fans into machines that didn't
need them? Because of a public perception of quality. Is Boeing a
"quality" plane? Ask people just after a major air crash and you'll
get a different opinion.

How do we define quality in broad enough terms to appeal to most
people's subjective view of quality?  How do we then go about
quantifying, evaluating, producing, and testing such a subjective,
and some might say ephemeral, subject as "quality"? 1 in 6 Concordes
have burst into flames and crashed on take-off. Does this mean that
they are not a quality product? Anything less than perfection will
fail to meet the perceived "quality" that some people expect and
even perfection would not achieve "quality" for some people due to
their view of "quality".

Take as a practical example of the subjective nature of quality the
emphasis that some place on the country of origin or the company
involved. These people will compare "quality" against the yardstick
of their preferred source, sometimes regardless of the actual
quality of the source.  "French champagne is quality. Does this
champagne taste like French champagne? No! Therefore it is poor
quality."  "Microsoft is the biggest software company in the world.
Does this software have the same functions that the Microsoft
software has? No!  Therefore it is poor quality."  "This product
comes from UK/USA/Australia therefore it must be good quality."

This may be really basic stuff to you, but to me it seems to aim at
the heart of some of your questions. Let me now address your
questions in turn. I will assume that our yardstick of quality as
testers has to be a fairly high standard, but will try to qualify my
responses in light of the subjective nature of quality.

> Quality Technology: Does there exist a sufficient technological
  base, plus good enough tools, to really deal with software quality
  issues.  After decades of tools and systems and approaches and
  methodologies, software systems continue to fail?  What is the
  software quality community doing wrong?

I would say that the technological capability exists, but the
technological base from which we work, along with the tools we use,
is not of a sufficient standard to produce consistent high quality.
By necessity, we use software and hardware that is of varying
quality in production of our "quality" products. It is said that a
student cannot know more than his master. How then can we expect to
produce quality that exceeds the quality of its environment? You
cannot make a silk purse from a sow's ear.

> Issues on The Web:  Does quality really make a difference on the
  Web?  Can users get along just fine without any checking?  If so,
  why are there so many slow and unreliable WebSites?  Is the web
  infrastructure really secure?  Has quality technology done
  anything good for the Web?

This is an example of external factors modifying the standard of
quality. The quality we see on the web makes a difference to our
perception of quality. So, quality does make a difference to
people's perceptions on the web, which in turn affects the standards
of quality applied to products on the web. In my opinion it is
making a difference in a negative sense, as it is conditioning
people to accept lower standards of quality on the web. This is why
there are so many slow and unreliable websites.

I don't believe the web infrastructure is secure, but this is not
related to the quality of web content.  I don't know if quality
technology has done anything good for the web.

> Industry Awareness of Quality Issues:  We all know the reality,
  that software quality may be great to talk about, but why is
  software quality nearly always the first casualty when budgets are
  tight?  Do contemporary software development projects really need a
  software quality component?  If so, what?

One reason quality is the first casualty is that it's so subjective
and ephemeral. We can count lines of code. We can see whether
development has produced a product that achieves the broadly stated
needs. But it's much harder to see how much quality we have.
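The contrast drawn above can be made concrete: counting lines of code takes only a few lines of a scripting language, while no comparably simple function exists for "how much quality we have". A minimal sketch in Python (the `count_loc` name, the file-extension filter, and the non-blank-line rule are my own illustrative assumptions, not a standard metric):

```python
import os

def count_loc(root, exts=(".py",)):
    """Count non-blank lines in source files under a directory tree.

    A crude size metric: trivial to compute, yet it says nothing
    about the quality of what those lines actually do.
    """
    total = 0
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):  # str.endswith accepts a tuple
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    total += sum(1 for line in f if line.strip())
    return total
```

No equivalent `count_quality(root)` can be written, which is exactly why the measurable quantity wins the budget argument.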

Another reason quality suffers is the people making the decisions.
In many cases they have come from a programming background and
understand programming well, but they've had little contact with
QA/testing and fail to see its importance. I believe that
contemporary software projects do need a quality component. At a
base level it is just checking that the product works (to some
extent) and that it meets the perceived needs. From this base level,
the project then determines how much more quality it wants and plans
for that. You will get less quality if a product is checked for an
hour by the person who wrote it than if it is checked for two weeks
by a dedicated team of QA/testers (assuming their findings are acted
upon).

> What About XP?  Does the Extreme Programming approach obviate the
  need for testing, and monitoring, and software quality control?
  Do the decades of software engineering R&D really mean so little?
  Why is XP so appealing?  Is it because XP reflects reality?

If for no other reason than the subjective nature of quality, XP
cannot replace quality processes. The developers are working to
their own perception of quality ("It's good enough"), unhindered by
any other view. Beyond that, I can't really say much about XP.

> What about CMM and SPICE and ISO/9000?  Do these standards and
  process-oriented approaches really work?  Are they worth the cost
  and trouble?  Do they improve anything?  If they don't, why are
  they still around?  Isn't good, solid thinking and careful
  implementation what this is really all about?  If so, what's the
  fuss?

As a broad rule-of-thumb everything done to improve quality will
improve quality to some extent. The few notable exceptions to this
should be rare enough to have already been discarded. As to the
value of them, that has to be judged by the value that the company
places upon quality.  If quality were less subjective it would be
easier. "Will you pay $10K to ensure that your new module won't
crash?" is a lot easier to ask than "Will you commit to $5-15K to
decrease the likelihood of a major issue being released?"

> Security and Integrity?  How insecure is the internet
  infrastructure, really?  Are things actually at a crisis point?
  Or is it hyped up by a few for commercial gain?

I don't know. I'm not an insider to the information needed to answer
these questions. The nature of the Internet infrastructure leads me
to believe that it's as unstable as it's purported to be. If you
compare the Internet infrastructure to a highway infrastructure it
helps to show some of the inherent problems, but this analogy will
still fall down.  Most countries have their own national highway
systems run by the national government, but some don't. Beyond the
highways you have various 'levels' of road infrastructure governed
by states, counties, shires, towns, large companies, small
businesses, or even by individuals (you have a driveway, don't you?).
If you wanted to send a piece of information by road from any place
on earth to any other, how could you be sure that nothing would
happen to it?  You couldn't!  You would take precautions and hope.
The level of precautions taken may improve the security of it, but
you cannot guarantee that it's safe. Similarly for the Internet.

I don't know if any of this really means anything in relation to
your questions, but that's how I see it from my little corner of the
world. A response would be appreciated. Even if just an
acknowledgement of receipt.


                  First International Workshop on
                   Verification and Validation of
                   Enterprise Information Systems
                  April 22, 2003 - Angers, France

In conjunction with the Fifth International Conference on Enterprise
Information Systems - ICEIS 2003.

Co-Chairs:  Juan Carlos Augusto and Ulrich Ultes-Nitsche,
Declarative Systems and Software Engineering Research Group,
University of Southampton, SO17 1BJ, Hampshire, United Kingdom


It is the aim of this workshop to stimulate the exchange of ideas
and experiences among practitioners, researchers, and engineers
working in the area of validating and verifying software for
enterprise information systems (EIS).  We welcome both practical and
theoretical papers, including case studies, from all areas related
to increasing the confidence in the correctness of EIS software,
such as:

      Large Scale Component Based Development
      Specification-based testing and analysis
      Reuse of specifications and proofs
      Combination of verification systems
      Quality control and assurance
      Software Architecture
      Application Integration
      Case studies
      Quality attributes
      Safety critical systems
      Model checking
      Process algebra
      Deductive systems
      Formal methods
      Petri nets
      Consistency Checking  and Data Integrity
      Finite-state abstractions of infinite-state systems

                          PROGRAM COMMITTEE

Glenn Bruns, Bell Labs (USA)
Jérôme Delatour, ESEO (France)
Stefania Gnesi, National Research Council (Italy)
Andy Gravell, University of Southampton (UK)
John Grundy, University of Auckland (NZ)
Alan Hu, University of British Columbia (Canada)
William Lam, SUN Microsystems
José Maldonado, Universidade de São Paulo (Brazil)
Radu Mateescu, INRIA (France)
Pedro Merino Gómez, Universidad de Málaga (Spain)
Daniel Moldt, University of Hamburg (Germany)
A. Jefferson Offutt, George Mason University (USA)
Alfredo Olivero, UADE (Argentina)
Marc Roper, University of Strathclyde  (UK)
Lone Leth Thomsen, Aalborg University (Denmark)
Pierre Wolper, University of Liège (Belgium)


       Australasian Information Security Workshop (AISW-2003)
                          CALL FOR PAPERS

In conjunction with Australasian Computer Science Week, Adelaide
Convention Centre, Adelaide, South Australia, 4-7 February 2003


Workshop Co-Chairs:
  P. Montague, Motorola Australia Software Centre, Australia
  C. Steketee, U of South Australia, Australia


Program Committee:
  H. Armstrong, Curtin U, Australia
  C. Boyd, Qld U of Technology, Australia
  E. Dawson, Qld U of Technology, Australia
  A. Farkas, Tenix Defence, Australia
  E. Fernandez, Florida Atlantic University, USA
  D. Gritzalis, Athens U of Economics and Business, Greece
  A. Koronios, U of South Australia, Australia
  J. McCarthy, DSTO, Australia
  M. Ozols, DSTO, Australia
  G. Pernul, U of Essen, Germany
  G. Quirchmayr, U of Vienna, Austria
  J. Slay, U of South Australia, Australia
  M. Warren, Deakin U of Technology, Australia
  R. Safavi-Naini, U of Wollongong, Australia


  J. Roddick, Flinders U of South Australia, Australia
  J. Harvey, Motorola Australia Software Centre, Australia

Topics of interest include:

  - Secure electronic commerce
  - Secure electronic government
  - Security standards and practices
  - Security product evaluation and certification
  - Critical infrastructure protection
  - Legal and policy issues
  - Information security management
  - PKI implementation
  - Security of mobile, wireless and ad-hoc communication
  - Peer-to-peer networks
  - Digital rights
  - Smart cards and other cryptographic tokens
  - Biometrics and other authentication technologies
  - Formal modeling and information security.


                      Remember When (Part 2)?

This was passed on by a friendly "younger person."

Didn't that (referring to Part 1 which appeared in the prior issue)
feel good? Just to go back and say, "Yeah, I remember that"? I am
sharing this with you today because it ended with a double dog dare
to pass it on.

To remember what a double dog dare is, read on.

And remember that the perfect age is somewhere between old enough to
know better and too young to care.

How many of these do you remember?

> Candy cigarettes
> Wax Coke-shaped bottles with colored sugar water inside
> Soda pop machines that dispensed glass bottles
> Coffee shops with tableside jukeboxes
> Blackjack, Clove and Teaberry chewing gum
> Home milk delivery in glass bottles with cardboard stoppers
> Party lines
> Newsreels before the movie
> P.F. Fliers
> Telephone numbers with a word prefix....(Raymond 4-1601).
> Peashooters
> Howdy Doody
> 45 RPM records
> Green Stamps
> Hi-Fi's
> Metal ice cube trays with levers
> Mimeograph paper
> Beanie and Cecil
> Roller-skate keys
> Cork pop guns
> Drive-ins
> Studebakers
> Washtub wringers
> The Fuller Brush Man
> Reel-To-Reel tape recorders
> Tinkertoys
> Erector Sets
> The Fort Apache Play Set
> Lincoln Logs
> 15-cent McDonald's hamburgers
> 5-cent packs of baseball cards
> ...with that awful pink slab of bubble gum
> Penny candy
> 35-cent-a-gallon gasoline
> Jiffy Pop popcorn

Do you remember a time when..........

>  Decisions were made by going "eeny-meeny-miney-moe"?
>  Mistakes were corrected by simply exclaiming, "Do Over!"?
>  "Race issue" meant arguing about who ran the fastest?
>  Catching the fireflies could happily occupy an entire evening?
>  It wasn't odd to have two or three "Best Friends"?
>  The worst thing you could catch from the opposite sex was "cooties"?
>  Having a weapon in school meant being caught with a slingshot?
>  A foot of snow was a dream come true?
>  Saturday morning cartoons weren't 30-minute commercials for action figures?
>  "Oly-oly-oxen-free" made perfect sense?
>  Spinning around, getting dizzy, and falling down was cause for giggles?
>  The worst embarrassment was being picked last for a team?
>  War was a card game?
>  Baseball cards in the spokes transformed any bike into a motorcycle?
>  Taking drugs meant orange-flavored chewable aspirin?
>  Water balloons were the ultimate weapon?

If you can remember most or all of these, then you have lived!!!!!!!

Pass this on to anyone who may need a break from their "grown-up"
world.

 . . . . . I double-dog-dare-ya!


                  Workshop on Improvements to the
          Software Engineering Body Of Knowledge (SWEBOK)
      Software Engineering Education Body of Knowledge (SEEK)

                          OCTOBER 6, 2002
                  École de technologie supérieure
                      Montreal, Quebec, Canada


STEP 2002 will be held in conjunction with the IEEE International
Conference on Software Maintenance (ICSM 2002).

Workshop Co-chairs:

Pierre Bourque (SWEBOK)
École de technologie supérieure
Telephone: (1) 514-396-8623

Timothy Lethbridge (SEEK)
University of Ottawa
Tel: (613) 562-5800 x6685


The purpose of this workshop is to produce detailed improvement
proposals for two related international initiatives: the Guide to
the Software Engineering Body of Knowledge (SWEBOK) project and the
Software Engineering Education Body of Knowledge (SEEK), which is
being produced within the context of the development of a model
software engineering undergraduate curriculum.  Though both bodies
of knowledge strongly overlap, their scope, their specifications,
their target audiences, and their intended uses are somewhat
different; therefore both initiatives can learn from each other.
Results of this workshop will serve as input to a second (to be
confirmed) workshop on this topic at the 2003 Conference on Software
Engineering Education and Training, which will be held in March 2003
in Madrid.

Guide to the Software Engineering Body of Knowledge (SWEBOK)

The Guide to SWEBOK is a project of the IEEE Computer Society to
characterize the discipline of software engineering and to provide a
topical guide to the literature describing the generally accepted
knowledge within the discipline. The Guide is intended to be useful
to industry, to policy-making organizations and to academia, for
instance, in developing educational curricula and university degree
program accreditation criteria.

The Guide describes the portion of each knowledge area (KA) that is
both generally
accepted and applicable to all software systems. Research topics and
specialized topics are out of scope for the current version. The
Guide also recognizes a list of related disciplines that are
important to software engineers but not treated in this Guide. The
level of coverage and depth is appropriate for a degreed software
engineer with four years of professional experience.

The Guide is currently available in a Trial Use Version that can be
downloaded for free from the project web site. The trial use version
was developed through a managed consensus process involving 8,000
comments from close to 500 reviewers in 42 countries. During the
next few years, trial usage and experimentation will lead to further
refinement of the document.

The SWEBOK Guide has been endorsed for trial usage by the Board of
Governors of the IEEE Computer Society and is undergoing its final
vote by the International Organization for Standardization (ISO) for
publication as a Technical Report.

Software Engineering Education Body of Knowledge (SEEK)

The SEEK forms part of what will be called Computing Curriculum -
Software Engineering (CCSE), an ACM/IEEE Computer Society initiative
to define undergraduate software engineering curriculum
recommendations. CCSE complements the long standing Computing
Curriculum - Computer Science, as well as similar volumes being
developed for computer engineering and software engineering.

CCSE has been under development since 2001. The first stage is to
produce SEEK, and the second stage will be to define how the
contents of SEEK can be put together to form coherent sets of
courses. As of late Summer 2002, SEEK had ten knowledge areas.  The
contents of each knowledge area had been developed by committees of
volunteers, using SWEBOK as one of their key starting points. The
results were then reviewed and improved by participants at an
international workshop and by the CCSE steering committee. The draft
document was sent for review by top experts in software engineering
in July 2002. By the time of this workshop SEEK will be in an
intensive public review stage.

SEEK proposes core topics that every software engineering student
must be taught, as well as elective topics. The core topics include
not only knowledge that is clearly part of software engineering, but
also knowledge from foundational areas such as mathematics, computer
science, and professional ethics.

For further information on this workshop, please contact Pierre
Bourque.


2nd International Conference on Commercial Off-The-Shelf (COTS)-Based
                     Software Systems (ICCBSS)
                        February 10-12, 2003
                           Marriott Hotel
                           Ottawa, Canada

Theme: Multiple Paths, Multiple Solutions

Building on the international interest and tremendous success of
ICCBSS 2002, held in Orlando this year, the National Research
Council Canada, the Software Engineering Institute, the Center for
Software Engineering,
and the European Software Institute are pleased to announce the 2nd
International Conference on COTS-Based Software Systems.

Conference Description:  Many researchers and practitioners
throughout the world are working on COTS software issues.  Using
COTS software components isn't a shortcut for building the same old
software systems in the same old way; it's a whole new approach that
involves multiple paths and multiple solutions, vital to building
and sustaining industry and government systems.  Using COTS software
helps reduce development and maintenance costs and matches the pace
of technological advances, but it creates new pressures.

The ICCBSS Conference focuses on the challenges of creating and
maintaining COTS-based software systems. Industry and government
systems are relying more and more on COTS software to
keep development and maintenance costs low and to keep pace with
technological advances.  The ICCBSS Conference provides an annual
forum where researchers and practitioners from industry, government,
and universities can gather to exchange ideas and results.

Call for Contributions:  The Program Committee is soliciting
submissions for original papers, experience reports, tutorials,
panel discussions, and poster sessions.  We invite commercial and
government COTS acquisition leaders, COTS program managers,
integrators and researchers who have experience with building,
maintaining, and managing COTS-based software systems to join us in
Ottawa, Canada at ICCBSS 2003 for an expanded program of tutorials,
panels, and paper presentations.

Attendee participation is what makes any conference a success.  Join
us to exchange ideas about current best practices and promising
research directions for working with COTS software in large or
critical systems.

If you are interested in participating -- either by submitting a
contribution or by attending ICCBSS 2003 -- please visit our Web
site for more details.

ICCBSS2003 contact:

Pierre Lamoureux
ICCBSS2003 Conference Secretariat
National Research Council Canada
Bldg. M-19, 1200 Montreal Road
Ottawa, Ontario, Canada K1A 0R6
Phone: 613 / 993-9431
FAX: 613 / 993-7250

    ------------xxx QTN ARTICLE SUBMITTAL POLICY xxx------------

QTN is E-mailed around the middle of each month to over 10,000
subscribers worldwide.  To have your event listed in an upcoming
issue E-mail a complete description and full details of your Call
for Papers or Call for Participation to <>.

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should
  provide at least a 1-month lead time from the QTN issue date.  For
  example, submission deadlines for "Calls for Papers" in the March
  issue of QTN On-Line should be for April and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc., and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items appearing in QTN represent the
opinions of their authors or submitters; QTN disclaims any
responsibility for their content.

STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR logo are
trademarks or registered trademarks of Software Research, Inc. All
other systems are either trademarks or registered trademarks of
their respective companies.

        --------xxx QTN SUBSCRIPTION INFORMATION xxx--------

To SUBSCRIBE to QTN, to UNSUBSCRIBE a current subscription, to
CHANGE an address (an UNSUBSCRIBE and a SUBSCRIBE combined) please
use the convenient Subscribe/Unsubscribe facility at:


As a backup you may send Email direct to <> as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:
           subscribe <Email-address>

   TO UNSUBSCRIBE: Include this phrase in the body of your message:
           unsubscribe <Email-address>

Please, when using either method to subscribe or unsubscribe, type
the <Email-address> exactly and completely.  Requests to unsubscribe
that do not match an email address on the subscriber list are
ignored.

		Software Research, Inc.
		1663 Mission Street, Suite 400
		San Francisco, CA  94103  USA

		Phone:     +1 (415) 861-2800
		Toll Free: +1 (800) 942-SOFT (USA Only)
		Fax:       +1 (415) 861-9801
		Web:       <>