sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +===================================================+
         +=======    Quality Techniques Newsletter    =======+
         +=======            December 2002            =======+
         +===================================================+

QUALITY TECHNIQUES NEWSLETTER (QTN) is E-mailed monthly to
subscribers worldwide to support the Software Research, Inc. (SR),
TestWorks, QualityLabs, and eValid user communities and other
interested parties, and to provide information of general use to the
worldwide internet and software quality and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of QTN provided that the
entire document/file is kept intact and this complete copyright
notice appears with it in all copies.  Information on how to
subscribe or unsubscribe is at the end of this issue.  (c) Copyright
2002 by Software Research, Inc.

========================================================================

                       Contents of This Issue

   o  The State of the Practice of Software Engineering: Guest
      Editor, Robert Glass

   o  eValid Updates and Specials

   o  Testing: Seven Deadly Virtues and Seven Cardinal Sins, By
      Boris Beizer

   o  Modeling and Developing Process-Centric Virtual Enterprises
      with Web Services

   o  Third International Workshop on Web Based Collaboration
      (WBC'03)

   o  International Workshop on Web Services: Modeling,
      Architecture, and Infrastructure (WSMAI 2003)

   o  Agile Development Conference

   o  Special Invitation: NJCSE Workshop on Software Reliability

   o  QTN Article Submittal, Subscription Information

========================================================================

         The State of the Practice of Software Engineering
                     Guest Editor: Robert Glass

This special issue will focus on the actual current practice of
software engineering in industry: what software engineering practice
is and, by implication, is not. The issue will present an accurate,
baseline view of practice upon which other practitioners, and
software engineering researchers, can build as they prepare us for
the future. It will describe current practice, not judge it. This
issue will not, for example, focus on "best practices."

Submissions must present information that accurately characterizes
software engineering as currently practiced. Surveys, for example,
are appropriate; case studies, unless their findings can be
generalized to a broad spectrum of practice, are not. Studies of
particular domains of practice are welcome. The issue will consist
of contributed papers as well as invited articles from people
intimately familiar with software practice.

Submission deadline: 15 April 2003
Publication date: November/December 2003

Send preliminary queries to the guest editor:

Robert L. Glass
Editor/Publisher
The Software Practitioner
1416 Sare Rd.
Bloomington, IN 47401
rlglass@acm.org

========================================================================

                    eValid Updates and Specials
                     <http://www.e-valid.com>

               Purchase Online, Get Free Maintenance

That's right, we provide you with a full 12-month eValid Maintenance
Subscription if you order eValid products directly from the online
store:  <http://store.yahoo.com/srwebstore/evalid.html>

                 New Download and One-Click Install

Even if you already received your free evaluation key for Ver. 3.2,
we have reprogrammed the eValid key robot so that you can still
qualify for a free evaluation of Ver. 4.0.  Please give us basic
details about yourself at:
<http://www.soft.com/eValid/Products/Download.40/down.evalid.40.phtml?status=FORM>

If the key robot doesn't give you the keys you need, please write to
us  and we will send you an eValid evaluation key ASAP!

                     New eValid Bundle Pricing

The most-commonly ordered eValid feature key collections are now
available as discounted eValid bundles.  See the new bundle pricing
at:  <http://www.soft.com/eValid/Products/bundle.pricelist.4.html>

Or, if you like, you can compose your own feature "bundle" by
checking the pricing at:
<http://www.soft.com/eValid/Products/feature.pricelist.4.html>
Check out the complete feature descriptions at:
<http://www.soft.com/eValid/Products/Documentation.40/release.4.0.html>

Tell us the combination of features you want and we'll work out an
attractive discounted quote for you!  Send email to  and be assured of a prompt reply.

========================================================================

         Testing: Seven Deadly Virtues And Seven Cardinal Sins
                       by Boris Beizer, Ph.D.

      Note:  This article is taken from an unpublished
      collection of Dr. Boris Beizer's essays entitled
      "Software Quality Reflections" and is reprinted with
      permission of the author.  You can contact Dr. Beizer at
      .

1.1 Seven Cardinal Testing Sins -- General

My first list was much longer than seven items because there are
many more things one can do to make testing fail than there are to
make it work. Anyhow, here are the seven worst things you can do to
hurt testing.

1.2. Organize Your Tests Along Functionally Meaningful Lines.

Testers, like programmers, should have organized minds and should
keep the user's needs foremost. It seems natural, therefore, to group
tests according to the functionality that the user sees. It may seem
natural, but it can be, and often is, counterproductive, leading to
excessive testing effort and therefore buggier software. Software
maps features (i.e., functionality) onto programs. The mapping isn't
one-to-one. Typically, 75% or more of the routines serve most
features, or no single feature directly, and few features are
implemented by a single routine or a small group of specialized
routines. Infrastructure such as storage allocation, file management,
input command parsing, printing, and the interface with the OS cuts
across all features. Users would like a neat mapping of specific
features onto specific routines because that would be easier to
understand, but the price of clarity is a big, slow, buggy package.

Testing resources should be allocated in proportion to the size and
complexity of the code under test. Consequently, most testing effort
is apportioned not to the functionality of the package (the stuff
that usually works) but to that other 75%: the indirect stuff. The
first consequence of organizing tests along functionally meaningful
lines is that 75% or more of the tests are constrained to an
essentially arbitrary structure that cuts across too many
components. A second consequence is that you won't be able to begin
testing until almost all of the software is integrated and working.
A third consequence of a functionally dominated test organization is
that different tests are likely to be based on different test
techniques rather than on the specifics of the application. An
application orientation means that you'll have a syntax test
followed by a state-machine test, then a domain test, etc., with the
consequence that you have to bop back and forth between different
test tools: in itself, a major hindrance to both test design and
test execution automation. A more practical organization of test
suites (a sketch in code follows the outline) is:

1.  The driver used to execute the test.

1.1.  Within that, the technique used to create the test.

1.1.1.  Within that, component groups that implement the feature.

1.1.1.1.  Within that, the functionality seen by the user.
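
Here is a minimal sketch, in Python, of what that nesting might look
like in practice.  Every driver, technique, component, and feature
name below is invented for illustration; none comes from the article
or from any real test suite.

    # Test inventory keyed the way the outline above suggests:
    # driver -> test technique -> component group -> user-visible feature.
    TEST_SUITE = {
        "capture_replay_driver": {
            "state_machine_tests": {
                "session_manager": ["login_feature", "timeout_feature"],
            },
            "domain_tests": {
                "input_parser": ["search_feature", "report_feature"],
            },
        },
        "load_driver": {
            "stress_tests": {
                "storage_allocator": ["bulk_import_feature"],
            },
        },
    }

    # Grouping by driver and technique first lets one tool invocation
    # run many related tests before you have to switch tools.
    for driver, techniques in TEST_SUITE.items():
        for technique, components in techniques.items():
            for component, features in components.items():
                print(driver, technique, component, features)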

Serving the users' needs doesn't mean doing things the way the user
expects us to. The users' needs are ill-served by the programmer who
takes design and architectural instructions from a non-programming
user. Similarly, the users' quality needs are ill served by the
tester who strives to please by organizing tests along the users'
naive, preconceived ideas of how testing should be organized.

1.3. Design Your Tests To Achieve Coverage.

"Cover" is probably the most important word in the tester's lexicon.
If you don't use some kind of structural coverage metric such as
branch cover, you can't have a quantitative notion of how thorough
your testing has been. Am I suggesting that we now drop the use of
structural coverage metrics? No! We still want to use whatever
coverage metrics we can, such as statement, branch, predicate
condition, and data-flow.

The question is not the coverage metrics we use but the basis of our
test design. If we use a specific metric, it does not follow that our
tests should be designed with the central purpose of achieving that
metric. Test cases, especially unit test cases, designed for the
primary purpose of achieving coverage tend to be unnatural,
difficult (if not impossible) to sensitize, and too far removed from
the routines' specifications. If the programmer really understands
the routine's requirements and has done a good job of testing, then
basic coverage metrics such as branch coverage should be easily
achieved by a set of behaviorally derived test cases. At worst, the
programmer may have to add a few cases to cover some compound
predicate conditions. The idea, then, is to design test cases from a
behavioral point of view, with the intent of meeting all the
requirements for that component, but to monitor the completeness of
the tests with a variety of structural coverage metrics and
associated tools.
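
As a hedged illustration of that last point (the routine, the tests,
and the missing case below are all hypothetical): a behaviorally
derived test set can reach 100% branch coverage yet leave one leg of
a compound predicate unexercised; a predicate-condition metric flags
it, and one added case closes the gap.

    # Hypothetical routine with a compound predicate.
    def ship_order(in_stock, credit_ok, rush):
        if in_stock and (credit_ok or rush):
            return "ship"
        return "hold"

    # Behaviorally derived cases (from the imagined spec):
    assert ship_order(True,  True,  False) == "ship"   # normal order
    assert ship_order(False, True,  False) == "hold"   # out of stock
    assert ship_order(True,  False, True)  == "ship"   # rush override

    # Both branches of the "if" are now covered, but the condition
    # combination (credit_ok=False, rush=False) with in_stock=True was
    # never exercised.  One added case closes the gap:
    assert ship_order(True,  False, False) == "hold"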

1.4. Test The Combination Of Extreme Values.

Most programs have numerical inputs and/or numerical configuration
parameters. These parameters have minimum and maximum values. It is
a testing truism that tests that probe values at or around (just
less, just more) these extreme values are productive. That truth
hasn't changed. The question is not what we do for single numerical
values but for combinations of numerical values. If a single
parameter has two extremes (minimum and maximum) and we test these
extremes properly, we'll have six test cases to run (min-, min,
min+, max-, max, max+). For two parameters, that's 6 x 6 or 36 cases
to try and for n parameters, that 6n. That's an exponential test
growth. That's over 60 million test cases for ten parameters. These
are easy test cases to generate automatically, but even so, at one-
hundred test per second, that's 168 hours of testing. And 10
parameters isn't a lot. I'm sure you have many routines or programs
in which the parameter count is in the 50-100 range. That works out
to 2 x 1068 years or about 2x1058 times longer than the estimated
age of the universe. Even automation won't crack that nut. And if
you only did the extreme (min and max) that's just 2n and a mere 2 x
1011 universe lifetimes.
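
A quick back-of-the-envelope sketch (mine, not the author's) that
reproduces these figures, assuming six probe values per parameter
and one hundred tests per second:

    # Combinatorial growth of extreme-value testing.
    SECONDS_PER_YEAR = 3.15e7
    RATE = 100.0                       # tests executed per second

    def years_to_test(n_params, probes_per_param=6):
        """Years needed to run every combination of probe values."""
        total_tests = probes_per_param ** n_params
        return total_tests / RATE / SECONDS_PER_YEAR

    print(6 ** 10)                     # ~6.0e7 cases for 10 parameters
    print(6 ** 10 / RATE / 3600)       # ~168 hours at 100 tests/second
    print(years_to_test(100))          # ~2e68 years for 100 parameters
    print(years_to_test(100, 2))       # min/max only: still ~4e20 years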

There are several things wrong with willy-nilly combinations of
extreme values even if test generation and execution are fully
automated. The two worst problems (beyond mere impossibility for
realistic numbers of parameters) are: 1) false confidence and 2)
it's usually a bad bet.

The false confidence comes from running tens of thousands of tests,
most of which won't find bugs. Like it or not, we tend to be
impressed by test quantity rather than quality. Even experienced
testers get impressed. You can't test for 10^68 years, so you knock
off a few billion tests over the next several months. But the tests
you ran represent a statistically insignificant portion of the input
space, and no confidence, statistical or otherwise, is warranted.

It's a bad bet because you're asserting that there is a functional
relation between the parameters, or even worse, that there is a bug
that will create a functional relation between parameters that were
not otherwise related. That's more likely to be sabotage than a bug.
If there is a functional relation between parameters, then you
should test the correctness of that relation (e.g., the equation
that characterizes the relation) and not just the extreme point
combinations. You test extreme point combinations only if there is a
known functional relation between the parameters or if there is a
high likelihood of a bug that could create such a relation.
Otherwise, testing combinations of extreme points isn't much better
than attempting all possible input values.

1.5. Base Your Tests On Previous Bug Statistics

The main purpose of testing is to discover bugs. In every test
method there is an implicit assumption about the kinds of bugs
you're likely to find. If for example, you have lots of domain bugs,
then domain testing is implied. Similarly, a weak front-end for
command-driven software implies syntax testing. It would seem,
therefore, reasonable to use the history of past bugs as a basis for
selecting testing methods.  It is, up to a point.

The point is the pesticide paradox. If QA is effective, the
programmers learn from their mistakes and are given the means to
correct previous bugs of the same kind and to prevent such bugs in
the future. It follows that the techniques used to discover those
bugs are no longer as effective because those kinds of bugs no
longer exist. Monitoring bug type statistics is an important job for
QA. If the statistics are stable (i.e., same kinds of bugs) over
several releases, then the programmers aren't learning or haven't
been given the means to profit from what they've learned. In that
case, past bug statistics are a good basis for test design, but only
because there's a major process flaw. The faster the programmers
learn and react to bug statistics, the worse a guide past bug
statistics will be for test design.

1.6. Focus On Test Techniques That Worked Best In The Past.

This is the same issue as basing tests on bug statistics. The
pesticide paradox again. It seems reasonable to examine past testing
methods and to concentrate on those that have been most cost-
effective at finding bugs in the past. This would be okay but for
the pesticide paradox. The techniques wear out. A classical example
is afforded by stress testing. When first used, this method is very
effective: in fact, one of the most cost-effective test methods
there is. Because the bugs found are in a small set having to do
with interrupts, synchronization, timings, critical races,
priorities, and operating system calls, they are quickly discovered
and fixed. Thereafter, stress testing loses much of its past
effectiveness. Programmers aren't fools and they're not about to let
themselves be hit over and over again by the same test methods. They
will adopt design practices and designs that eliminate the bugs
found by your favorite, most cost-effective method. The consequence
of not recognizing this is false confidence.

1.7. Pick The Best Test Tools And Stick To Them.

Many of you have gone through the trauma of tool chaos, in which
every department or project buys or builds its own tools
independently of any other group. Besides being confusing, you pay
more for site licenses, training, and all the rest. It seems
reasonable to minimize tool chaos by standardizing on a few basic
tools. That's okay if you're not too serious about it and don't
expect too much in the long run.

The test tool business is unstable. Tools are changing quickly and
so are tool vendors. Just as the rest of the software industry is
now engaged in heavy consolidation, so is the software test tools
industry.

Furthermore, test tools by themselves have no long-term viability
unless they become part of a comprehensive software development
environment. Development tool vendors have realized this and have
turned their attention to test tools. Tool vendors are merging,
being bought, or going out of business.

The situation is so fluid today that any hope of long-term use of
the current best tool is wishful thinking. A more realistic approach
to tool buying is to expect a tool to be useful for at most three
years. With that in mind, it may pay to allow some tool chaos for
the present and to hedge your bets with several vendors, so that you
are better positioned to exploit whatever eventually does emerge as
the standard.

1.8. Be Realistic.

A common angry protest from software developers and users is that
test cases are unrealistic. The developer protests that you're
stressing the software too much in the wrong way, in ways that don't
represent realistic use -- I sure hope so. The user complains that
because so much of your testing is based on weird situations, they
don't have confidence that the software will work as intended when
actually put to use -- the user is also right. But they are both
wrong, because they are confusing two different test objectives. If
you make (most of) your tests realistic, you'll destroy the
effectiveness of the one objective without accomplishing the other.
The two objectives are: (1) testing to reveal bugs and (2) testing
to support statistically valid inferences about the product's
fitness for use in a real environment.

For the first objective, note that most of the code (e.g., 75%)
concerns low probability but potentially harmful situations. In a
statistical sense, much of the code is fundamentally unrealistic. As
testers, we must test every line of code (or see to it that someone
has done it). That means exercising many conditions that are
unlikely to come up in any one user's experience. The programmer who
protests that we're being unrealistic is contradicting herself
because much of her code is "unrealistic" in the same sense. Another
reason for using "unrealistic" tests is that realistically, that's
often the only way to force certain low probability timing and race
conditions to which our software must be invulnerable.

Realism in testing (for objective 1) is a hang-up that gets in the
way of good testing. In some software-driven electromechanical
systems, for example, the only way to force timing problems is to
remove the hardware and to test the software in a simulator that
allows gears and things to "move" several times faster than the
speed of light. Realism also gets in the way of test automation.
Remember, we're testing software, not the larger system in which the
software may be embedded. Testing realism might mean building
complicated robots that punch keys, turn dials, pour blood into test
tubes, etc. These aren't needed to test software. A false limitation
of software testing to realistic hardware and/or system constraints
can, for an embedded system, eliminate 50%-85% of all the tests that
should be run.

The second objective, testing to make statistically valid inferences
about the software's fitness for use in the field, must be based on
realistic tests. It is well known that tests designed to find bugs
cannot be used to make statistically valid inferences about the
software's probability of failure in use. To do that we must have
user behavioral data (called user profiles), embed the software
package in a simulator, and drive that simulator with properly
generated random events that statistically match the user's
behavior. Such testing is realistic (in a statistical sense) but
usually entails only 5% to 15% of the kinds of tests we run to find
bugs.
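
A minimal sketch of what "properly generated random events" might
look like; the operation names and weights below are invented for
illustration and are not taken from the article or any real profile.

    import random

    # Hypothetical user profile: operation -> observed relative frequency.
    USER_PROFILE = {
        "query_balance":  0.60,
        "transfer_funds": 0.30,
        "open_account":   0.05,
        "close_account":  0.05,
    }

    def next_operation(rng=random):
        """Draw one operation according to the user profile."""
        ops, weights = zip(*USER_PROFILE.items())
        return rng.choices(ops, weights=weights, k=1)[0]

    # Drive the simulated system with statistically realistic traffic.
    for _ in range(10):
        operation = next_operation()
        print("driving system under test with:", operation)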

2. Seven Deadly Testing Virtues

2.1. General

Sin is always easier to find than virtue. Abraham couldn't find ten
good men in Sodom, and that was after arguing God down from fifty. I
thought for a long time about the seven best things one could do to
improve testing in an organization and came up with this short list
of items.

2.2. Ignore Performance And Throughput.

More bad software has been wrought in the name of performance than
for any other reason. While the operating system and the software
system's architecture and algorithms can have a profound effect on
performance, today there's almost nothing that the individual
programmer can do at the coding level to improve performance. In
fact, most attempts to improve performance by coding tricks reduce
rather than enhance performance.

It's the day of optimizing compilers, folks. Optimizers, like John
Henry's steam drill, beat humans every time. Trying to beat the
optimizer in a modern system means believing that you can second-
guess the highly secretive optimization algorithms, the operating
system, and the caching hardware and firmware. That's a blissfully
arrogant wish. Code-level attempts at optimization are usually
counterproductive because they result in strange code that the
optimizer can't handle well.

The modern view is to code in the cleanest, simplest, most
straightforward manner you can. You then measure the software's
actual behavior using statistically valid user profiles, find the
hot spots, if any, and then tweak the few lines of code that may be
causing problems. Trying to tweak code blindly is an out-of-date
habit that should be discarded.

2.3. Don't Write Defensive Code.

We're treading on holy ground here. Haven't we all been taught that
good software is defensive; that good software defends itself
against the vagaries of input errors? We're not suggesting that we
drop all defenses. The question is what are we defending against? If
we look at much of contemporary software we find low-level internal
routines that have no direct interface with the outside world
(users, say) protecting themselves against the vagaries of
potentially harmful internally generated data. That's programming
under the assumption that your fellow programmer is a fool at best
or a cad at worst. That's programming under the assumption that
there is no effective process; that there are no inspections; that
there hasn't been any effective unit testing. It's programming in
order to cover your ass and fix blame rather than to fix problems.
The trouble with this kind of code is that there are so many
idiosyncratic notions of danger that there's no way to test that
code.

I'm not suggesting that we stop protecting our software against
outside inputs and user errors. Such protection is essential. Nor
should we reduce protection against faulty inputs from networks or
other systems. However, protecting the software from external abuse
is not the same as protecting it from internal abuse and bugs. If
you build in self-protection against internal problems such as bugs,
you are doing on-line debugging in the user's shop and at the user's
expense. That's no way to treat a user.
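
A small sketch of the distinction being drawn here (both functions
are hypothetical): defend data that crosses the external boundary,
but treat a violated internal invariant as a bug to be caught during
testing rather than a condition to be quietly "handled" in the field.

    def parse_user_age(raw_text):
        """External boundary: defend against anything the user types."""
        try:
            age = int(raw_text)
        except ValueError:
            raise ValueError("age must be a whole number")
        if not 0 <= age <= 150:
            raise ValueError("age out of range")
        return age

    def internal_mean(values):
        """Internal routine: callers are trusted; a violation is a bug."""
        # The assertion documents the contract and fails loudly under
        # test, instead of quietly papering over an internal error.
        assert len(values) > 0, "internal_mean called with empty list"
        return sum(values) / len(values)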

2.4. Encourage The Programmers To Cheat.

There's a too-common but erroneous belief that independent testing
means programmers should be kept ignorant of the test cases that the
independent testers intend to use, so that the programmers can't
cheat. The converse of that ill-founded belief is the equally
ineffectual belief that independent testers should not know the
workings of the software they test, lest they become seduced by the
programmers' logic. Both of these beliefs are counterproductive to
good software.

Testers can't do a good job unless they know something about how the
software is built. The specifics of the design often dictate which
technique to use and which technique is unlikely to be productive at
finding bugs. For example, domain testing is not a very effective
bug catcher if the programmers have made all domain boundary
parameters table-driven. Similarly, syntax testing will rarely find
bugs if attempted against a generated parser. Independent testers
should not be so weak-willed that knowledge will corrupt them or
make them less effective. For most independent testers, the more
they know about the software, the more effective their tests will be
at exposing bugs.

Now for the converse. Should the programmers know what the testers
have in store for them? It is possible for a dishonest programmer to
cheat and fake test results. But dishonesty will manifest itself no
matter what safeguards you impose. All testing is based on the
"competent programmer" hypothesis. This hypothesis states that we're
dealing with "natural bugs" rather than sabotage. Similarly, though
never stated, we also assume that the programmer is honest in
addition to being competent. If a programmer is honest, competent,
and given the time and resources needed, then if we give them
detailed information on the tests we've planned, they have the
opportunity to examine their software in that light and will often
make design changes that avoid the potential problems for which our
tests were fishing. Isn't that what we want to accomplish? To find
and fix the bug before it's written in? Just as more software
knowledge means better tests, more test knowledge means better
software. And if the programmers "cheat" by avoiding all the traps
we've laid for them, then as testers it is our responsibility to
escalate the game (and the software's quality) by thinking of even
subtler conditions and corresponding tests.

2.5. Don't Do Configuration Compatibility Testing.

One of the big bugaboos of current software is the configuration
compatibility problem and how to test for it. Using PC software as
an example (because that's where the problem is most acute), how
should we test to assure that this software will work in every
possible hardware and software configuration it might face, now or
in the future? And how do we determine by testing that this software
can coexist with every other piece of software or application
package with which it may cohabit?

These are tough problems. As a personal example, I once went through
three motherboards and several BIOS chips, and had my main computer
out of action for almost four months on such compatibility issues.
Firmware in two different SCSI controllers disagreed about how
things should be done, with the consequence that one stepped on the
other, causing it to send control sequences that corrupted the BIOS
(which was flash ROM instead of true ROM). Dozens of
total system rebuilds, lots of hardware replacement, hundreds of
wasted hours, only to find out that a seeming hardware problem
turned out to be software after all. And a configuration
compatibility bug at that. With this experience, shouldn't I be the
one screaming for complete configuration compatibility testing? If
those vendors had done their job, wouldn't the bug have been
discovered earlier and my horrible experience avoided? No to both: I
still say don't do configuration compatibility testing because it
wouldn't have found the problem even if it had been done.

The problem with configuration compatibility testing is that it is
virtually infinite. There are almost 200 million PCs and
workstations out there, and not one of them is like mine. If you
decide to try just a sample of configurations, and do an honest job
of trying the combinations and variations (remember, it's not just
which programs coexist, but the order in which they are loaded), you
rapidly come to the conclusion that the job is impossible even for
one computer with one set of software and one set of static
hardware.

Now please, don't misunderstand me. Programs should interact with
other programs in a predictable way. If your software has direct
interfaces with other programs (such as export and import,
communications, data structures, backup, etc.) then you must test
those interfaces. You'd be a fool not to. I'm not talking about
direct interfaces or interfaces with virtual processors such as the
operating system, network managers, etc. Test the direct interfaces,
test them often, and test them well.

I'm talking about configuration compatibility with other software
that has no direct interface. For example, two supposedly
independent SCSI controllers, or my fax program that must be loaded
last, or a sorry backup program that can't be installed from any
drive but A: if there's a CD-ROM on the system. Pick your favorites
from the infinite list. Okay, so why not do configuration
compatibility testing against the programs most likely to coexist --
say, the five most popular word processors, spreadsheets, graphics
packages, etc.? Several reasons. First, the list is big, not small.
If you pick the "most popular" and attempt to get, say, 98% coverage,
you'll be thinking in terms of cross-testing against several hundred
packages rather than a dozen. Second, if there's a compatibility
problem, it's not just what you are doing but what the other program
is doing that causes the problem. That means testing not just
against static packages that sit there, but against packages doing
things. More precisely, it's not just testing against several
hundred packages but against several hundred packages (squared),
each of which can be in any of several hundred states (squared).

For 250 packages, each of which can be in 100 states, and assuming
100 test cases per set-up, executed at 100 tests per second, we have
almost twenty years of testing. If you take the load order of the
programs into account, multiply the above by about 10^17 for a mere
10^7 universe lifetimes.  The combinatorics are awful, and whatever
testing you do is a statistically insignificant sample of the realm
of possibilities. You don't do configuration compatibility testing
because you really can't, and because attempting to do it just leads
to false confidence.
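
One way to reproduce the "almost twenty years" figure (the pairing
assumption is mine; the article does not spell it out): test every
ordered pair of 250 packages, each pair in every combination of 100
states, with 100 test cases per set-up at 100 tests per second.

    # Rough arithmetic behind the configuration-testing estimate.
    PACKAGES = 250
    STATES_PER_PACKAGE = 100
    TESTS_PER_SETUP = 100
    TESTS_PER_SECOND = 100
    SECONDS_PER_YEAR = 3.15e7

    setups = PACKAGES * (PACKAGES - 1) * STATES_PER_PACKAGE ** 2
    total_tests = setups * TESTS_PER_SETUP
    years = total_tests / TESTS_PER_SECOND / SECONDS_PER_YEAR
    print(round(years, 1))   # ~19.8 years, before considering load order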

What should you do? Configuration compatibility problems come about
because a programmer (yours or the other package's) breaks the rules
of the environment. For example, a programmer bypasses the operating
system and does direct I/O. Configuration compatibility problems
would be much rarer if marginally knowledgeable programmers didn't
try to second-guess the operating system and its programmers.

Usually, such gaffes are committed in the name of improved
performance -- and we've seen how misguided that attempt usually is.
Instead of doing configuration compatibility testing, spend the
money on better testing of the direct interfaces your software has
with the operating system, database packages, networks, etc. Spend
the money on more thorough inspections to assure that your
software's programmers don't break the virtual processor rules.
Spend the money on marginal defensiveness that will help you
determine whether your software has been stepped on. Remember that
the stepper software rarely gets the blame; it's the "stepee" that
crashes and is blamed. Spend the money to use and promote industry
interworking standards. But don't spend it doing useless, hard
testing that leads to unwarranted confidence.

2.6.   Do Away With Beta Testing.

Heresy! Beta testing seems to be firmly entrenched as the last
bastion against rampaging bugs. Beta testing, it is believed,
assures that bad bugs don't get to the user. Let's look at an
alternate view of beta testing. Beta testing is effective only when
the internal testing isn't. Some major players in the software game
have learned that with good internal unit testing, good integration
testing, and thorough in-house system testing, the bugs found by
beta testing aren't worth the cost of administering a beta test
effort, even if the "testers" work unpaid. Beta testing isn't free.
You have to ship them the software. You have to provide above-normal
support at no cost to them. You have to have enough additional ports
to handle the load. If the software is well tested, beta testers
don't find much that your own testers and programmers haven't found
earlier. Furthermore, they report hundreds of symptoms for the same
bug, and it's a big job to correlate all those trouble reports only
to find that they all describe the same bug, one that was long ago
scheduled for repair.

But that's not the most insidious thing about beta testing. You
can't really do away with beta testing, because the mythic
effectiveness of beta testing is so strong that the trade press
won't let you get away without it. The worst part of beta testing
is the pesticide paradox and programmers' immunity. Beta testers
wear out. Because most beta testers aren't professional testers,
they're unlikely to change their behavior or how they go about
finding bugs. But the bugs they could have found are now gone. So
you get a clean bill of health from the beta testers and you
confidently (and falsely) believe that your software is now robust.
Wrong! It's only robust against this group of beta testers, not
against the general user population.

The second insidious thing about beta testing and testers is that
they are vulnerable to what I have called "programmers' immunity."
This is the phenomenon in which programmers or testers
subconsciously modify their behavior in order to avoid bugs that
they are barely consciously aware of. It's amazing how quickly the
subconscious mind teaches us how to walk through a mine field. Beta
testers, just like programmers and in-house testers, want the
software to work, and judging from how often software that flies
through beta testing with honors then crashes in the field,
programmers' immunity must also be at work to some extent. Automated
testing minimizes the effects of programmers' immunity, but most
beta testers aren't automated; consequently, for them, the immunity
is likely to be more prevalent.  False confidence again.

Gather some statistics on your beta test effort and do some cost
justification for it (true total cost) and then determine if the
effort is worth continuing or if the money is better spent on
beefing up inspections, unit testing, and in-house system testing.

2.7. Keep The Users Out Of Testing.

More heresy? Doesn't he believe in user involvement? Sure I do; but
would you let the users design the software? They're not competent.
Nor are they competent to design more than a small part of the tests.

The main purpose of testing is to expose bugs and only secondarily
to demonstrate that the software works (to the extent that such a
demonstration is theoretically and/or practically possible). Users
focus on operational considerations far removed from the software's
guts. The user doesn't know about storage allocation, so there's no
testing of overflow mechanisms. My experience with users is that
they have very little understanding of the software's internals and,
as a consequence, most of their tests are very gentle and grossly
repetitive. Users don't want a test, they want a demo. A good demo
is at most 15% of the tests we should run. As for the question "Is
the software doing the right job?" -- a question in which the user
should be involved -- that should have been settled early in the
game by prototypes. The user should be invited to exercise (but not
test) the prototype so that we know that what we build is really
what was wanted. But exercising a prototype is hardly the same as
testing software.

The users have learned that they should keep out of the programmers'
hair if they want good software. But the same users, because they
don't know testing technology, or even that there is such a thing
as testing technology, mistakenly believe that they know something
about testing. That's the trouble: they know a thing or two when
what they need to know to do effective testing is ten thousand
things. We must keep plugging away on this one and educate the user
that testing is a technical discipline as rigorous as programming,
and that if they're not competent to do programming, they're not
competent to do testing.

2.8. Do Away With Your Independent Test Groups.

I always leave the most controversial to last. I don't believe in
independent test groups and I never have, despite the fact that I
have at times pushed it very hard and have often fought for it
(sometimes without success). Independent test groups are an immature
phase in the evolution of the software development process through
which most organizations must go. I don't believe in independent
test groups because I don't believe in testing. Testing is quality
control. It finds bugs after they've been put in. As a long-term
goal, we must strive to avoid or prevent the bugs, not just find
them and fix them. I don't believe in testing because every time a
test reveals a bug, that's one more opportunity to prevent a bug
that we somehow missed. We will keep on finding and deploying better
bug prevention methods, but as we do, we'll also keep building more
complicated and subtler software with even subtler bugs that demand,
in turn, even more powerful test methods. So while I don't believe
in testing, I also believe that some kind of testing will always be
with us.

Now how about independent test groups? Do we really want to do away
with them? Let's distinguish between independent testing and
independent test groups. Independent test groups are wasteful
because they mean that two different sets of people must learn the
software. Such groups are wasteful because the very independence
means that much testing will be redundant. I believe in independent
testing but not in independent test groups. Independent testing
means taking a different look at the software than the one taken by
the software's originator. That different look can be taken by a
different person, by a different group, or by a different
organization (e.g., an independent test group).  In mature
organizations, with programmers who don't have their egos written
all over the software, individual programmers and members of the
same programming group can provide that independent look. It's more
efficient and need not compromise software quality or testing
effectiveness.

Then why independent test groups? They're essential for good
software and good testing in an immature organization in which bug-
finders (testers) are despised and punished while bug inserters
(programmers) are rewarded for fixing the very bugs they inserted.
Independent test groups are essential when the testers have to be
protected from the abuse of powerful and angry programmers.
Independent test groups are no longer needed when the culture has
matured to the point where such protection is not needed because the
abuse no longer exists. See how deep and how fresh are the testers'
scars. If they're recent and unhealed, then you'll have to bear with
the inefficiencies of independent test groups for a while longer. If
the scars are healed and barely visible, it's time to dismantle this
archaic stepping stone on the route to a quality process and quality
software.

But those are negative reasons for having independent testing.
Independent testing can be useful (here he goes contradicting
himself again) if the testers bring specialized expertise to the
test floor that the developers do not, or cannot conveniently, have.
That is, if they add value. By specialized expertise I mean such
things as: localization, usability, performance, security, cross-
platform compatibility, configuration, recovery, networks,
distributed processing -- and a whole lot more. Whether such
value-added independent testing is done by an organizationally
independent test group is largely a parochial cultural question, and
also a question of process maturity.

3. Rethink Your Paradigms

Software and software development continue their rapid evolution --
as do testing and QA technologies.  Some of the best testing methods
of the past are three decades old.  They emerged in the mainframe
days of batch processing, expensive computation, and minuscule
memories. As a community devoted to quality, it would be blindly
arrogant of us to think and act as if the profound changes of just
the past decade had no effect on our methodologies. Commit to a
continual quantitative evaluation of your paradigms so that when the
world and technology change, your methods change to match.

========================================================================

        VIEWS2003: MODELING AND DEVELOPING PROCESS-CENTRIC
               VIRTUAL ENTERPRISES WITH WEB-SERVICES

                       Special Session at the
                      7th World Conference on
        Integrated Design and Process Technology (IDPT 2003)
    "The Future of Software Engineering in the Networked World"
                          June 16-20, 2003
              Fragrant Hill Hotel -- Beijing -- China

             <http://www.dit.unitn.it/~aiellom/idpt03/>

                        SESSION DESCRIPTION

A new paradigm of computation is emerging in the networked world,
one based on service-oriented computing. As new standards and
practices for web-service technologies materialize, the way business
processes are designed and used may change radically. Emerging
web-service standards offer exciting new ways to develop and
standardize cooperative business processes, within or beyond
vertical domains.  This development allows companies to swiftly
architect extended virtual enterprises to support business
collaborations, which can last for as little as a single business
transaction or span a couple of years.

Unfortunately, existing web-service standards tend to focus on
defining the semantics of processes as a set of collaborating "low-
level" interfaces, largely neglecting the actual business semantics
and pragmatics. In particular, most of the existing initiatives lack
effective mechanisms to make web-service collaborations a viable
solution in the business context; e.g., no mechanism currently
exists to allow non-deterministic planning of business operations
and pro-active business change. Moreover, the non-functional
properties, which are believed to make up the largest part of the
design of a web-service, are largely ignored by both the industrial
and academic communities.  Especially in the case of web-service
orchestrations that support virtual enterprises, possibly along the
axis of a virtual value chain, forecasting the properties of a
web-service by investigating the non-functional properties of its
constituents is crucial. Lastly, and possibly most importantly, the
convergence of web-services and business process management requires
that process-centric (extended) enterprises carefully commence with
aligning processes and web-services.

This session focuses on both scientific and new industrial solutions
to the problems outlined above.  The session addresses web-service
computing, the interrelation and interaction of business processes
and web-services, and, lastly, the extension of existing web-service
initiatives with constructs that allow them to become more effective
in highly volatile business contexts.

Topics of interest include (but are not limited to):

  * designing Web-Services (WSs) for supporting Business Processes
    (BPs), e.g., extending existing OO/CBD modeling and design
    approaches;
  * allowing BP and WS alignment, e.g., at the technical level by
    facilitating configuration management (traceability) and at the
    business level by defining new information planning techniques;
  * functional web-service support for business processes, e.g.,
    modeling languages and methodologies to develop BPs with WSs;
  * non-functional web-service support for business processes, e.g.,
    aligning non-functional properties of processes and WSs;
  * web-service orchestration both bottom-up (fusing WSs) and top-
    down (first designing processes and then matching process
    semantics with WS semantics)
  * web-service planning and coordination languages
  * web-service ontologies
  * extended enterprise design and implementation
  * transaction management, e.g., extending distributed transaction
    models to allow long-running business transaction protocols that
    relax traditional ACID properties;
  * business conversation models;
  * standardization efforts in vertical and horizontal business
    domains;
  * lightweight workflow systems for modeling and enacting
    web-services;
  * legacy integration with web-services, e.g., wrappers, selective
    reuse of legacy functionality that is viable in new business
    contexts;
  * CASE tools supporting the design and construction of virtual
    enterprises.

                              CONTACT

Marco Aiello
aiellom@dit.unitn.it
http://www.dit.unitn.it/~aiellom
Department of Information and Communication Technology,
University of Trento
Via Sommarive, 14, 38050 Trento
Italy
Phone: +39 0461 88 2055
Fax:   +39 0461 88 2093

Willem-Jan van den Heuvel
W.J.A.M.vdnHeuvel@uvt.nl
http://infolab.uvt.nl/people/wjheuvel
InfoLab, Tilburg University
PO Box 90153, 5000 LE Tilburg,
The Netherlands
Phone: +31 13 466 2767
Fax :  +31 13 466 3069

========================================================================

                    Third International Workshop
                 on Web Based Collaboration: WBC'03

                       September 1 - 5, 2003
                      Prague (Czech Republic)

            <http://www.kti.ae.poznan.pl/dexa_wbc2003/>

In the last few years, the applicability and functionality of
systems for collaboration support have expanded, leading to their
growing use in organizational, communication, and cooperation
processes. This provides opportunities to study their technical,
business, and social impacts.  Integrating incoming technology with
existing organizational practices is usually very challenging. The
knowledge gained from such experiences is a valuable resource for
all those who plan to develop software tools to support team
interaction.

At the same time we observe the growing influence of the World-Wide
Web. The WWW -- by now the most popular service on the Internet --
has evolved rapidly from the simple, read-only data storage system
it was a few years ago into today's universal, distributed platform
for information exchange. New Web-based applications with freely
distributed data, end-users, servers, and clients, operating
worldwide, are central topics of many research activities. Recently
the WWW has also been perceived as an attractive base for a
distributed computer system supporting cooperative work, since the
Internet is the most flexible network, with an architecture that can
support group activities to the maximum extent.

In parallel to the WWW evolution we observe a growing impact of new
technologies: agent systems, mobile computing, workflow, and
ubiquitous computing. We can expect that these technologies will
also exert a large influence on group/organizational structures and
processes.

All the aforementioned emerging new technologies are exciting in
their own right, but their technological and organizational
integration to support collaboration raises many interesting
questions and is a challenging, new research agenda.

WBC'2003 is a continuation of the previous successful workshops on
Web Based Collaboration, organized in September 2001 in Munich,
Germany (http://www.kti.ae.poznan.pl/dexa_wbc/), and recently, in
September 2002 in Aix-en-Provence, France
(http://www.kti.ae.poznan.pl/dexa_wbc2002).

WBC'2003 attempts to integrate two themes of practice and research:
the functional, organizational, and behavioral issues and the
modeling or implementation issues associated with collaborative Web
based work. WBC'2003 brings together practitioners and researchers
from different domains working on design, development, management,
and deployment of Web-based systems supporting teamwork within
organizations.

TOPICS OF INTEREST:

Papers are solicited on the enabling information technologies, with
an emphasis on their relation to collaboration.  Relevant topics
include, but are not limited to, the following:

  - Collaborative Systems: strategies of human cooperation over the
    Web, computer platforms and architectures in support of remote
    cooperation, mediators, wrappers, design and implementation
    issues of collaborative applications;

  - Agent Technologies: agents supporting cooperation, agents for
    finding, collecting and collating data on the Web, brokering
    agents;

  - Interoperability Infrastructures: compositional software
    architectures in support of collaboration, combining distributed
    object management platforms with Web and Java for cooperative
    applications, middleware infrastructures, describing metadata on
    the Web, providing semantic interoperability through metadata,
    emerging interoperability standards;

  - Dataweb Technology and Database Infrastructure for
    collaboration: Web access to databases including Java Database
    Connectivity, database Web servers, Web interfaces to databases,
    database Web applications;

  - Workflow Systems: workflow architectures in support of
    collaboration processes, modeling of cooperation processes,
    truly distributed enactment services and interoperability of
    workflow engines, dynamic modification of running workflows;

  - Electronic Business: establishment of contacts, suppliers search
    and negotiation, contract negotiations, Business-to-Business and
    Business-to-Employee cooperation support, establishment and
    coordination of virtual enterprises, shared business processes.

CONTACT:

Prof. Waldemar Wieczerzycki
Department of Information Technology
The Poznan University of Economics
Mansfelda 4
60-854 Poznan
POLAND

phone:    (48)(61) 848.05.49
fax:      (48)(61) 848.38.40
e-mail:   wiecz@kti.ae.poznan.pl
www:      <http://www.kti.ae.poznan.pl>

========================================================================

              International Workshop on Web Services:
       Modeling, Architecture and Infrastructure (WSMAI 2003)
                  April 22, 2003 - Angers, France
       <http://www.iceis.org/workshops/wsi/wsi2003-cfp.html>

Program Chairs/Contacts:

- Jean Bezivin
  University of Nantes - France
  Jean.Bezivin@sciences.univ-nantes.fr
  http://www.sciences.univ-nantes.fr/info/lrsg/Pages_perso/JB/HP_JB_2001.htm

- Jiankun Hu
  Royal Melbourne Institute of Technology - Australia
  jiankun.hu@rmit.edu.au
  http://goanna.cs.rmit.edu.au/~jiankun/


- Zahir Tari
  Royal Melbourne Institute of Technology - Australia
  zahirt@cs.rmit.edu.au
  http://www.cs.rmit.edu.au/eCDS/members/index.html

Workshop Background and Goals:

"A Web service is a software application identified by a URI, whose
interfaces and bindings are capable of being defined, described, and
discovered as XML artifacts. A Web service supports direct
interactions with other software agents using XML based messages
exchanged via internet-based protocols" [W3C].

Web Services are supported by four basic technologies: XML, SOAP,
WSDL, and UDDI. Extensible Markup Language (XML) is used to
normalize the exchange of business data among trading partners by
providing an excellent means for data encoding and data formatting.
Simple Object Access Protocol (SOAP) defines a simple way to package
information and send it across system boundaries. Web Services
Description Language (WSDL) is an XML language for describing Web
Services, enabling one to separate the description of the abstract
functionality offered by a service from the concrete details of a
service description.  Universal Description, Discovery, and
Integration (UDDI) enables businesses to quickly, easily, and
dynamically find and transact with one another. These standards are
backed by the major players in the industry, such as IBM, Microsoft,
BEA, Sun, and Oracle.  However, this set of technologies by itself
is not sufficient to develop real applications for the Web.
Development of real-life applications relies heavily on the system
models and architectures through which this set of technologies is
efficiently implemented.  The goal of this workshop is to bring
together researchers, engineers, and practitioners interested in Web
Services to discuss how to develop real applications on the Web.
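
As a rough illustration of the SOAP piece of that stack (the
endpoint, namespace, and operation below are invented placeholders,
not a real service), a request is just an XML envelope POSTed over
HTTP:

    import urllib.request

    # A minimal SOAP 1.1 envelope for a hypothetical GetQuote operation.
    ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope
        xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetQuote xmlns="http://example.com/stock">
          <Symbol>SR</Symbol>
        </GetQuote>
      </soap:Body>
    </soap:Envelope>"""

    request = urllib.request.Request(
        "http://example.com/stockquote",        # hypothetical endpoint
        data=ENVELOPE.encode("utf-8"),
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": "http://example.com/stock/GetQuote",
        },
    )
    # response = urllib.request.urlopen(request)  # only against a real service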

This workshop covers the fields of modeling, architectures and
infrastructures for Web Services.


Topics of interest include, but are not limited to:

- Web Services Modeling
- The Model Driven Architecture (MDA) and Web Services
- Web Services and Semantic Description
- Ontologies for Web Services
- Web Services Architecture
- Frameworks for Building Web Service Applications
- Composite Web Service Creation and Enabling Infrastructures
- Web Service Discovery
- Solution and Resource Management for Web Services
- Dynamic Invocation Mechanisms for Web Services
- Quality of Service for Web Services
- SOAP and UDDI Enhancements
- Case Studies for Web Services
- E-Commerce Applications Using Web Services
- Workflow and Web Services
- Transaction and Web Services
- Web Services Interoperability

Program Committee (partial list):

- Sumi Helal, University of Florida (USA)
- Xavier Blanc, LIP6, Université de Paris (France)
- Yetongnon Kokou, Université de Bourgogne (France)

========================================================================

                 The Agile Development Conference

                          June 25-28, 2003
                     Salt Lake City, Utah  USA
                

Agile Development is a conference aimed at exploring the human and
social issues involved in software development and the consequences
of the agile approach to developing software. The agile approach
focuses on delivering business value early in the project lifetime
and on being able to incorporate late-breaking requirements changes,
by accentuating the use of rich, informal communication channels,
frequent delivery of running, tested systems, and attention to the
human component of software development. A number of techniques and
processes have been identified in the use of agile approaches, and
we expect more to be found.

The purpose of this conference is to examine closely the proposed
processes and techniques, to report the outcome of studies on human
issues affecting the speed and quality of the development, and to
collect field reports from projects using agile approaches. The
social-technical goal of the conference is to increase the exchange
on these issues between researchers and practitioners, between
managers and developers, between social specialists and computer
specialists.

We solicit research papers, experience reports, tutorials, and
technical exchange topics around the four themes of the conference:

  > The People Side Of Development
  > Agile Methods and Processes
  > Agile Development and Programming
  > The Business Side Of Agile Development

RESEARCH PAPERS

A research paper relates the outcome of a study on the selected
topic, and possible extrapolations from those studies.

Topics may include, but are not restricted to:

  > Research on named or new methodologies and approaches: Adaptive,
    Crystal, DSDM, FDD, Scrum, XP, informal modeling techniques and
    practices, adapting/ trimming existing methods, special projects
    or others.
  > Research on named or new techniques or practices: pair
    programming, war-rooms, test-first design, paper-based
    prototyping, or others.
  > Research on special topics or tools: configuration management,
    testing, project steering, user involvement, design for agility,
    or others.
  > Studies of development groups using ethnographic or social
    research techniques.
  > Quantitative and qualitative studies on particular topics: tools
    for virtual teams, relationship of community issues to project
    outcome, or others.

In addition, we welcome papers for a session called "Throwing the
gauntlet."  Papers for this session should issue a challenge to the
agile community, based on data or extrapolation, to cause the agile
community to reexamine its assumptions, stretch its thinking, or
develop new approaches over the following year.

========================================================================

     Special Invitation: NJCSE Workshop on Software Reliability

The NJCSE will sponsor a workshop for 20 or so NJ software
professionals dealing with Software Reliability.  Chandra Kinatala,
John Musa, and Prof. Lui Sha of the Univ. of Illinois will be there.

Please plan to come and participate.  You need to prepare a 1-3 page
position paper on the state of software reliability, my paper, the
opportunity for organization collaboration, the practical importance
of reliability to commercial success of software products, the
relationship between reliability and security, or any other topic
related to software reliability.  These position papers will be
posted on our web site and I will seek opportunities to publish
them- with your approval of course.

Contact:

Larry Bernstein
Industry Research Professor
Lieb 103, Computer Science
Stevens Institute of Technology
Castle Point, Hoboken, NJ 07030
<http://attila.stevens-tech.edu/~lbernste>
Email: lbernste@stevens-tech.edu

========================================================================


========================================================================
    ------------>>> QTN ARTICLE SUBMITTAL POLICY <<<------------
========================================================================

QTN is E-mailed around the middle of each month to over 10,000
subscribers worldwide.  To have your event listed in an upcoming
issue, E-mail a complete description and full details of your Call
for Papers or Call for Participation to .

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should
  provide at least a 1-month lead time from the QTN issue date.  For
  example, submission deadlines for "Calls for Papers" in the March
  issue of QTN On-Line should be for April and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc., and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items appearing in QTN represent the
opinions of their authors or submitters; QTN disclaims any
responsibility for their content.

TRADEMARKS:  eValid, STW, TestWorks, CAPBAK, SMARTS, EXDIFF,
STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR logo are
trademarks or registered trademarks of Software Research, Inc. All
other systems are either trademarks or registered trademarks of
their respective companies.

========================================================================
        -------->>> QTN SUBSCRIPTION INFORMATION <<<--------
========================================================================

To SUBSCRIBE to QTN, to UNSUBSCRIBE a current subscription, to
CHANGE an address (an UNSUBSCRIBE and a SUBSCRIBE combined) please
use the convenient Subscribe/Unsubscribe facility at:

       <http://www.soft.com/News/QTN-Online/subscribe.html>.

As a backup you may send Email direct to  as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:
           subscribe 

   TO UNSUBSCRIBE: Include this phrase in the body of your message:
           unsubscribe 

Please, when using either method to subscribe or unsubscribe, type
the  exactly and completely.  Requests to unsubscribe
that do not match an email address on the subscriber list are
ignored.

               QUALITY TECHNIQUES NEWSLETTER
	       Software Research, Inc.
	       1663 Mission Street, Suite 400
	       San Francisco, CA  94103  USA
	       
	       Phone:     +1 (415) 861-2800
	       Toll Free: +1 (800) 942-SOFT (USA Only)
	       Fax:       +1 (415) 861-9801
	       Email:     qtn@soft.com
	       Web:       <http://www.soft.com/News/QTN-Online>