sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +=======    Quality Techniques Newsletter    =======+
         +=======             June 2001               =======+

QUALITY TECHNIQUES NEWSLETTER (QTN) is E-mailed monthly to Subscribers
worldwide to support the Software Research, Inc. (SR), TestWorks,
QualityLabs, and eValid user communities and other interested parties to
provide information of general use to the worldwide internet and
software quality and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of QTN provided that the entire
document/file is kept intact and this complete copyright notice appears
with it in all copies.  Information on how to subscribe or unsubscribe
is at the end of this issue.  (c) Copyright 2003 by Software Research, Inc.


                         Contents of This Issue

   o  Software Is Different, by Boris Beizer (Part 3 of 3)

   o  A Comment on Boris Beizer's Paper "Software is Different", by
      Hans Schaefer

   o  QWE2001: 5th Internet and Software Quality Week Europe (12-16
      November 2001)

   o  A Return to Fundamentals in the 00's, by Danny R. Faught

   o  The Vulnerabilities of Developing on the Net, Robert A. Martin
      (MITRE Corporation) (Part 1 of 2)

   o  AQuIS 2002: Conference Announcement

   o  Software Process Improvement in the Small, by Robert P. Ward,
      Mohamed E. Fayad and Mauri Laitinen

   o  A Comment on O'Neill's Review, by Don Mills

   o  Open Source Security Testing Methodology Manual, by Pete Herzog

   o  QTN Article Submittal, Subscription Information


                  Software Is Different (Part 3 of 3)
                              Boris Beizer

      Note:  This article is taken from a collection of Dr. Boris
      Beizer's essays "Software Quality Reflections" and is
      reprinted with permission of the author.  We plan to include
      additional items from this collection in future months.  You
      can contact Dr. Beizer at .

2.7. Quality of What?

In traditional engineering, quality is easy to define and measure.
Quality metrics fall into two broad categories: structural (tolerances)
and behavioral (operational failure rates). Also, there is generally an
empirical relation between tolerances (or rather, the lack thereof) and
failure rates. It is possible to say that if various parts are built to
specified tolerances then it follows that the failure rates will be
within specified bounds. The fact that such relations exist (be they
developed from theory or determined empirically) is fundamental to
statistical quality control of manufactured objects. There is no agreed
way to measure software quality and despite close to 30 years of trying,
no such way appears to be on the perceptual horizon. Here are some past
proposals and what is wrong with them.

   1. Bugs per Line of Code. There's no agreement that the most popular
      size metric, "lines of code," is the best metric to use (see 2.8
      below). Even if we adopted some program size metric such as
      compiled token count, what has bug density got to do with what the
      user sees? If most of the bugs are in low execution probability
      code, then what does it matter if the bug density is high? Unless,
      of course, it is life-critical software and that low probability
      code takes care of the one-in-million situation. Then it does
      matter. Bugs per line of code is a property of the software, but
      the failure rate the user sees is a property of the way that
      software is used: so we can't measure the code's quality unless we
      know how it will be used. Not all bugs are equal. Some bugs, or
      rather their symptoms, are more severe than others. That also
      depends on expected user behavior. No existing quality measure
      today takes bug symptom severity into account. Finally, what is a
      bug? The answer to that one leads to deep ethical and
      philosophical issues debated for 4,000 years and software
      engineers and/or quality experts are unlikely to end the debate.
      So scratch that metric.

   2. Defect Detection Rate. Track the product and note the mean time
      between successive defect detections. When that time reaches a
      specified value, declare the software fit for use. That's not a
      measure of the software. It's a measure of the stamina,
      imagination, and intuition of the test group. The rate could be
      small because the testers ran out of ideas or are incompetent.

   3. User Perceived Failure Rate. This is the most promising measure,
      but it is as much a measure of the user as it is a measure of the
      software. I almost never use the graphics features of this word
      processor and I have never used the mini-spreadsheet in it. This
      word processor supports 34 languages, of which I use only one
      (American English). I probably use less than 30% of the features
      and 98% of what I do depends on only 10% of the features I do use.
      The very flexibility of software and our ability to pack it with
      features means that any given user's behavior is unpredictable and
      therefore, so is any usage-based quality measure.
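
The point about usage-based measures can be made concrete with a little arithmetic. Below is a minimal Python sketch, with invented per-feature defect probabilities and invented usage profiles, showing how the same code yields very different perceived failure rates for different users:

```python
# Two users of the same program: identical code (so identical
# per-feature failure probabilities), but different usage profiles.
# All numbers are invented for illustration.

fail_prob = {"editing": 0.001, "graphics": 0.05, "spreadsheet": 0.08}

# Fraction of each user's operations that exercise each feature.
writer = {"editing": 0.98, "graphics": 0.02, "spreadsheet": 0.00}
analyst = {"editing": 0.30, "graphics": 0.20, "spreadsheet": 0.50}

def perceived_failure_rate(profile):
    """Expected failures per operation under a given usage profile."""
    return sum(p * fail_prob[f] for f, p in profile.items())

print(perceived_failure_rate(writer))   # ~0.002: dominated by the solid editor
print(perceived_failure_rate(analyst))  # ~0.05: dominated by the buggier extras
```

Identical bug density, identical code; yet the analyst perceives a failure rate roughly 25 times worse than the writer does.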

I could go on, but it would be a redundant recitation of software
engineering's state of ignorance when it comes to measuring quality.
The software quality issue is not that of the quality of a manufactured
object whose design is the result of engineering, but of the quality of
the engineering process itself. There is no evidence that civil
engineers make fewer mistakes than software engineers do. In fact,
software engineers probably make fewer mistakes than any other
engineering discipline. The next time critics decry software
engineering's lack of suitable quality metrics, ask them what metric
they use to judge the quality of their own engineering, in contrast to
the metric they use for the quality of their products.

This is a fundamentally new problem that first surfaced in software,
but it will undoubtedly become more important in other fields of
engineering as the complexity of engineered products inevitably
increases. We see this already in aviation. It is well known that
contemporary commercial aircraft disasters can rarely be attributed to
a single cause; they result instead from an unfortunate conjunction of
several causes. For example, the accident is caused by: a failure of component X
AND abnormal weather AND the foreshortening of runway 25 AND the loss of
the NOTAM that should have warned the pilot of that fact AND ... FAA
accident reports are instructive reading because each factor must be
examined and a recommendation for its prevention made. Imagine if we had
to do a bug postmortem like FAA accident investigations and distribute
specific recommendations for that one bug to all concerned programmers?
Do it for every bug found? How much software would then get produced?

2.8. Quantifiability

Quantification in engineering is generally attributed to Galileo,
although the Egyptians, the Mayans, the Romans and later the Arabs were
darn quantitative centuries before. Whatever the genesis of quantified
engineering, it has been a fundamental part of the engineering paradigm
for at least four centuries. It is assumed in traditional engineering
fields that anything of interest can be quantified -- that is, reduced
to numbers; and if it can't be quantified, it isn't engineering.

Not a bad assumption: it has been true for several centuries and has
always served engineers well in the past. But it is merely an assumption
-- a cherished belief, a pragmatic observation -- not an immutable
fact. There is no evidence that this assumption of quantifiability
applies to software at all. And there is considerable evidence that it
does not apply.

There are many formal structures that cannot be quantified in the
ordinary sense of simple numbers, or even vectors of numbers. For
example: partly ordered sets, general relations, graphs. Quantification
implies comparison (e.g., either A>B, B>A, or A=B); furthermore, in
most engineering, quantification means strict numerical comparison. But
some things just don't compare that way. The general rule is partial
ordering rather than strict ordering. There are infinitely many ways to
order things, and the strict ordering of traditional engineering
quantification is merely the oldest and the simplest. Furthermore, our
understanding of structures in computer science makes it clear that we
cannot willy-nilly assume that strict ordering applies. While it is
always possible to tag numbers onto partially ordered structures
(e.g., leaf count, node count, depth), such numbers may not capture
what we want them to capture, any more than "lines of code" captures
what we mean by "complexity."
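
Partial versus strict ordering is easy to see in code. Here is a minimal Python sketch (the permission sets are invented) of two objects that the natural ordering cannot rank, even though a tagged-on number declares them equal:

```python
# Sets under inclusion form a partial order: some pairs simply do not
# compare.  A number tagged onto each set (its size) imposes a strict
# order that the underlying structure does not actually have.

a = {"read", "write"}
b = {"read", "execute"}

# Neither set contains the other, so inclusion cannot rank them...
print(a <= b, b <= a)        # False False: a and b are incomparable

# ...yet a simple size metric happily calls them equal.
print(len(a) == len(b))      # True
```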

There is a huge literature on software metrics and software developers
gather a lot of numbers about their products; but that does not mean
that such metrics are fundamentally correct, accepted, or even useful.
There is a lot of ongoing research. There have been many attempts to
establish axiomatic foundations for software metrics; but none are
without flaws and without controversy. The state of the software metrics
art is at best in the pre-Galilean stage. The metrics we have do not
scale, do not reliably transfer from one project to the next, and do
not compose, to mention only a little of the ongoing controversy.
Furthermore, it may be that any notion of software metrics as we know
them is fundamentally flawed [BAUE95]. The options are uncomfortable:
at best, adopt design restrictions that make software quantifiable
(which restrictions those are is an unanswered, indeed uninvestigated,
question); at worst, drop the notion of quantification for software and
replace it with something else -- also as yet undiscovered. To promote the idea
that quantification of software at present has the same solidity as
quantification in traditional engineering is a distortion of the facts,
misleading, and potentially dangerous.

Let's leave the formal math issues aside and restate it this way. To
insist on strictly ordered quantification (i.e., ordinary numbers) is to
eliminate from consideration most ways of measuring things in computer
science. The assumption that interesting and important aspects of
software can always be represented by numbers (traditional
quantification) gets in the way of developing the kind of quantification
appropriate to software, if any such quantification exists -- and it may
be that any notion of quantification will be incorrect. This is the
second hardest paradigm shift of all.

2.9. Adequate Knowledge

The most difficult paradigm shift of all is letting go of the notion
that we can have adequate knowledge. That too is an assumption. The
Eighteenth Century Rationalist [FERM45] model of the universe,
championed most eloquently by René Descartes, is with us yet. The
Rationalist model holds that the universe is a giant clockwork whose
gears and pinions are the physical laws. If you make enough measurements
of enough things, then in principle everything is predictable. That is,
it is always possible to get the information you need for any purpose
whatsoever -- whether it is worth doing is another issue that we won't
discuss here -- but it is, in principle, always possible.

Old René didn't have to deal with the Heisenberg uncertainty principle;
with Gödel's theorem; with chaos. The Heisenberg uncertainty principle
tells us that you can't make such measurements even if you wanted to.
Gödel's theorem tells us that if you had the measurements, you might not
be able to do the calculation (or know that you had finished the
calculation). Chaos tells us that even insignificant errors or
variations in our measurements could lead to arbitrarily divergent
predictions. The theoretical ideal of adequate knowledge is based on
very shifty foundations. It applies on the common scale of physical
objects: but not for the very small (atoms and particles), not for the
very big (the universe), and not for the very complex (software).

These fundamental problems aside, software has an uncertainty
principle of its own. We cannot predict how the changes in our software
will change the users' behavior. Sometimes we get complacent and think
that we can. A few years ago, who would have questioned the ultimate,
ongoing, perpetual dominance of the software industry by Microsoft? I
didn't. Did you? Yet, today, the arena has shifted to the Internet and
Microsoft looks more like a frightened elephant besieged by a pride of
hungry lions than Godzilla.

That's a dramatic example, but it happens daily on a smaller scale.  We
change our software in response to perceived market demands, which in
turn affects the market and the users' behavior, making any
quantification based in whole or in part on user behavior next to
impossible.

That the users' behavior is unknowable is not the only barrier to
adequate knowledge. There are many others, of which combinatorial
explosion is perhaps the biggest. As Danny Faught [FAUG96] likes to
quote in his Internet signature: "Everything is deeply intertwingled."
The "intertwingling," meaning the combinatorially intractable potential
interaction of everything in software with everything, provides a
practical barrier (even with big computational iron) to what could be
analyzed even if all the facts were known and none of the fundamental
problems discussed above existed.
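
The scale of the intertwingling is easy to compute. A short Python sketch (feature counts chosen arbitrarily) contrasts the quadratic growth of pairwise interactions with the exponential growth of the feature subsets that might interact:

```python
from math import comb

# Pairwise feature interactions grow quadratically with the number of
# features; the number of feature subsets that might interact grows
# exponentially.  The feature counts below are arbitrary.

for n in (10, 20, 40):
    print(n, comb(n, 2), 2 ** n)
```

At 40 features there are only 780 pairs, but over a trillion subsets: exhaustive interaction analysis is out of reach long before the "big computational iron" runs out.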

3. What We Can Do About It

3.1. The World in Which We Live.

All of us together are unlikely to solve the open problems of software
engineering discussed above. We can't wait for the unborn Nakamuras to
publish their Five Laws of Software Engineering. We can't call for a
moratorium on software development (civil engineers tried that in the
early 19th century when iron bridges collapsed all over the landscape,
to no avail). Those are things we can't do. We can start to change our
own paradigms, then our managements' paradigms, and eventually, our
users' expectations: I have no hope of ever changing the marketeers' way
of thinking. Here are some of the ways in which we could do that.

3.2. The Development Option

We have tried, for fifty years now, with no notable success, to bring
rationality to software engineering. We've made progress alright, but so
have user demands. All of our productivity increases and methodology
improvements have just barely kept pace with the ever-increasing
complexity of the software we produce. When you couple that with
justified but continually rising user dependability expectations and
the increasingly lower technical level of our users (also to be
expected), we have been falling behind, and continue to fall behind. At
some point we must ask whether the old way is the right way. Can we
realistically expect Nakamura to come out with the theorem that solves
everything, or Smith with the tool that fixes everything up? I think
not.

The bright light in the history of software engineering has been the
recognition that we must forego the fully generalized potential of
software and adopt design restrictions that make it possible for us to
understand our software. We have done it in the past; here are a few
such restrictions:

    1. Structured programming.

    2. Strong typing and user defined semantic types.

    3. Avoidance of global data.

    4. Adoption of style rules and use of style checkers.

    5. Encapsulation.

Each of the above is a restriction on the way software is written. Each
exacts a toll in terms of execution time, program space, and, most
important of all, personal ego. We could probably add ten more to the
above list of restrictions that are acceptable today. What matters is
not whether this is the right list or the ultimate list, but that such
a list exists and that programmers are willing to back down from the
total freedom (actually, chaos) of four and five decades ago. The list
will expand, of course, but will this do the job? No! Because a list of pragmatic design
restrictions avoids the questions of deriving design principles from
fundamentals (e.g., axiomatically). There has been no active search for
the right design restrictions. People, programmers and researchers, have
been saying "If we adopt design restriction X, then there will be fewer
bugs." That's doing it backwards. The question should be "What are the
design restrictions we need to make software constructable, testable,
and safe?"

3.3. Do Paradigm Checks.

When communication seems to be stalled and you are talking at cross-
purposes in a crossed transaction, run down a checklist of paradigms.
Ask "Are you assuming:"

    1. That Software Is Easy to Change?

    2. Bug Space Locality?

    3. Bug Time Locality?

    4. Proportionality Between Bug and Consequences?

    5. Bug Independence?

    6. Proportional Complexity Growth?

    7. Safety Limit Knowledge?

    8. Composability?

    9. Decomposability?

    10. Agreed Quality Measures?

    11. Quantifiability?

    12. Knowledgeability?

If you are yourself assuming such things, then you should quickly come
to terms with software reality. Until your own basic assumptions change,
you're unlikely to change anyone else's.

3.4. Restructure Priorities

The software development priority list that has dominated the industry
from the beginning is:

    1. Develop it as fast as possible.

    2. Make it run as fast as possible.

    3. Build it as tight as possible.

    4. Put in as many features as you can.

    5. Do it at the lowest possible cost.

    6. Worry later. Bugs will get fixed.

These priorities don't even make my list. Here's an alternative set of
priorities more in keeping with what we can and can't do in software
development. While one might argue that these new priorities apply only
to life-critical software, I assert that they should take priority for
all software because together, they are only a more formal way of saying
"do we know what we are doing?" And we should be able to honestly say
that we know what we are doing before we concern ourselves with
development time, performance, size, and cost.

   1. Can it be analyzed? Is its behavior predictable?

   2. Can it be tested?

   3. Does it have a composable model?

   4. Does it work?

   5. Have feature, component, and data interactions been reduced to the
      absolute minimum?

   6. Does it have the features the users need (as contrasted to want)?

3.5. Public Honesty About Our Ignorance.

Speaking from the perspective of a member of the software industry, I
say that we are a profoundly dishonest bunch. We software types lie to
ourselves about what we can and can't do; we lie to our managers about
how long it will take to do it; and they tell the marketeers that the
delivery schedule they want (which was more the product of martinis
than rational thought) is what they'll get. Among ourselves we may hint
that we don't know how to make something work, but we'll reassure the
public that it will work (somehow? somewhere? sometime?). If users have
unrealistic expectations then we are to blame. And therefore, it is up
to us to educate them so that their expectations are aligned with our
current abilities instead of our aspirations.


        A Comment on Boris Beizer's Paper "Software is Different"
                          by Hans Schaefer

(1) Is software really so different from traditional engineering products?

Maybe not. General engineering products, whether they contain software
or not, become more and more complicated and often suffer from design,
rather than manufacturing defects. You have highway projects running far
over budget, because the ground conditions have not been researched, or
because the drawings were wrong, or they cause accidents because of the
way they are designed. You have locomotives manufactured to the highest
standards, but failing because the cooling system was not designed to
be powerful enough. There are many such examples.

I think we just have to admit that there are two kinds of QA: Widget
(manufacturing) QA and design QA. Software QA is design QA, for 99% of
the cases.

(2) The claim that bug consequences in engineering products are near
in time to their cause. Yes, in many cases. But not in others. You have
car details designed in such a way as to allow rust to spread far
faster than if that detail were different in shape. But you find the
consequences only after, say, two years (instead of ten years for a good
design). When it comes to functionality failures, reliability issues,
maybe time locality is true in most cases. But there are lots of long
term effects with any product. What about side effects to our brains
from mobile phone use? IF THERE ARE ANY. Or cancer from eating food
grown near some industry causing pollution. There ARE bugs in
traditional products whose consequences first show up in the long run.
And in many cases it is either because of design flaws or because of
lack of research. Someone forgot to think of it at all.  Just like in
software requirements: the stuff we did not think of hurts us later.

(3) Synergistic bugs. This is the real killer, because such bugs are so
widespread in all of our world.  People want easy solutions: "Why is
there this traffic jam every morning?"  Because many people have
interacted in making decisions that lead to it.  There are no easy
solutions, neither to software bugs nor to bugs in any other complex
system.

What I am implying is NOT that Dr. Beizer is wrong, but that Dr.
Beizer's thoughts should be applied even to traditional products.

    Hans Schaefer
    Software Test Consulting


         QWE2001: 5th Internet and Software Quality Week Europe

Mark your calendars now:  12-16 November 2001 in Brussels, Belgium.

Dozens of Tutorials, a complete Technical Conference devoted to Internet
and Software Quality issues, with multiple technical tracks, QuickStart
tracks, vendor exhibits and presentations, plus Belgian special events.

Complete details at <>.


                  A Return to Fundamentals in the 00's

                           by Danny R. Faught

I saw an emerging software development theme as I was sitting in an
auditorium in Dallas, attending the annual workshop of the Association
for Software Engineering Excellence (ASEE). Two different presenters
alluded to this idea, which to my holistic-thinking mind constitutes a
theme. :-)

First was the well-known curmudgeon Bob Glass, whose keynote was
entitled "New Software Concepts: Breakthrough or BS?" He made some very
good points about recent software fads such as 4GLs, maturity models,
and object-oriented development, demonstrating that the published
research results are mostly inconclusive and generally hint at only
incremental improvements. The silver lining that he presented was that
new techniques can give improvements that are worthwhile, even if
they're not true breakthroughs.

Bob indicated that the big theme of the 1980's was productivity
improvement, and in the 1990's, it was quality improvement. Now in the
naughties, he says we need to "Cut the BS, quit hoping for
breakthroughs, [and] start making the modest but valuable progress that
is achievable."

Later Mike Epner of Teraquest presented some of the data from a recent
Cutter Consortium survey. The top reasons cited for software project
problems were "Inadequate project management by the vendor," "Poorly
understood requirements," "Inadequate internal project management," and
"Lack of vendor domain experience." These are all classic problems that
we've known about collectively for a very long time, yet they're still
the most persistent problems that need attention. Mike said it's time
for a return to fundamentals.

I asked Bob whether he could predict what this decade's fad would be. He
said he didn't see any fads forthcoming, and then asked if I had any
ideas.  I said that perhaps it might be risk management. I recently gave
a talk on the subject of risk management where I discussed how risk
management can be integrated throughout all levels of an organization.
On the way to the talk I stopped by to talk to a kindred spirit, Stiles
Roberts, at Ciber. I asked for his perspective on this topic, and his
answer went something like "Good grief, let's get companies doing any
sort of risk management at all before we worry about integrating the
process across organizations." Bob's reply to me at the ASEE workshop
was similar, so even there we're back to fundamentals.

So instead of searching for silver bullets, let's be happy with a
handful of BB's. This is not to say we should ever be happy with the
status quo, but let's also not hang our heads when we "only" achieve
improvements of 25%. After all, consider Jerry Weinberg's sage advice in
The Secrets of Consulting, talking about how you can't make a client
admit that they have any problems:

      Never promise more than ten percent improvement. Most people
      can successfully absorb ten percent into their psychological
      category of "no problem." Anything more, however, would be
      embarrassing if the consultant succeeded.

Reprinted from the Tejas Software Consulting Newsletter.  Copyright
2001, Danny R. Faught.  Danny is an independent software quality
consultant who can be reached at <>.


       The Vulnerabilities of Developing on the Net (Part 1 of 2)
                            Robert A. Martin
                         The MITRE Corporation

Disaster has struck. You would think that firewalls, combined with
filtering routers, password protection, encryption, and disciplined use
of access controls and file permissions would have been enough
protection. However, an overlooked flaw in the commercial web server
application allowed a hacker to use a buffer overflow attack to leverage
the application's privileges into administrator-level access to the
server. From there it was easy to gain access to other machines within
the Intranet and replace the public Web pages with details of the hack.
With the company's public site showing a live video stream of an
ongoing internal, private, and sensitive company meeting, there was
little room for doubt as to how badly they had been hacked.

While most organizations have addressed the various aspects of
implementing cyber security, many are failing to successfully address
the one security area where someone can bypass all other efforts to
secure the enterprise. That area is finding and fixing known security
problems in the commercial software used to build the systems. There may
be an answer, however, that will transform this area from a liability
into a key asset in the fight to build and maintain secure systems. The
answer rests in an initiative to adopt a common naming practice for
describing the vulnerabilities, and the inclusion of those names within
security tools and services. The initiative has been in practice for
more than a year across a broad spectrum of the information security and
software products community: It is called the Common Vulnerabilities and
Exposures (CVE) initiative.

To Err Is Human

Every programmer knows they make mistakes when writing software, whether
it be a typo, a math error, incomplete logic, or incorrect use of a
function or command. Sometimes the mistake is even earlier in the
development process, reflecting an oversight in the requirements guiding
the design and coding of a particular function or capability of a
software program. When these mistakes have security implications, those
with a security bent will often refer to them as vulnerabilities and
exposures.

All types of software, from large complex pieces to small and focused
ones, are likely to contain software mistakes with security
ramifications. Large complex software like operating systems, database
management systems, accounting systems, inventory management systems, as
well as smaller applications like macros, applets, wizards, and servlets
need to be evaluated for mistakes that can impact their security
integrity. Remember that when we put these various software products
together to provide an overall system, each of the software elements
that make up the system could be the one that compromises it.

Things were different in the past when an organization's computer
systems were stand-alone and only interacted with other systems within
the same organization. Only a few systems used tapes and file passing to
exchange information with outside systems. The same holds true for
government and military systems, including weapons. This isolation meant
that errors in commercial or developed software usually had limited
impact, at least from the public's point of view. In fact, most errors,
crashes, and oversights went unnoticed by the general public. At most,
these problems would cause occasional troubles for an organization's
closest business partners.

There Is No Hiding Now

The same is not true today. Very few of today's organizations, whether
in the private sector or government, have or build self-contained
systems. It is the norm for employees, customers, business partners, and
the general public to have some degree of access and visibility into the
minute-by-minute health and performance of an organization's software
environment. Processing delay, calculation mistakes, system downtime,
even response time slowdowns are noticed and often draw criticism.

Accompanying this increased visibility is an explosion in the different
ways systems are accessed and used. Web and application servers have
been created to help make systems interconnect and leverage Internet-
based technologies. Access to web sites, purchase sites, online help
systems, and software delivery sites makes the organizations that own
the sites very visible. To better support business partners and
employees working at remote locations, on the road, or from home, we
have connected our backroom systems to the corporate Intranet and
extranet. New technologies have emerged, like instant messaging, mobile
code, and chat, whose functionality requires effortless access by users
across organizational boundaries. The movement to highly accessible
systems, driven by the need to save time and make businesses more
efficient, and the reality of having to do more with less, has
dramatically increased the impact of mistakes in commercial software.

While errors in self-developed software can still have a major impact on
an organization's ability to function, it is the vulnerabilities and
exposures in the commercial software they use to build systems that
create the bigger problem. A mistake in a commercial program can open a
front or a back door into situations that most organizations strive to
avoid. A mistake permitting unauthorized access can expose private
information about customers and employees. It can allow hackers to
change information or perform services with your systems to their own
advantage. In addition, a vulnerability can allow them to shut down your
internal and publicly accessed systems, sometimes, without your
knowledge. In those cases where the vulnerability or exposure allows
someone to make changes or bring down systems, or when the theft of
services and information is eventually noticed, there can be a huge
impact to the organization's public image.  There can also be legal
liability and direct operational impact.

What Can You Do?

Determining the vulnerabilities and exposures embedded in commercial
software systems and networks is a critical "first step" to fixing the
problems. A simple patch, upgrade, or configuration change could be
sufficient to eliminate even the most serious vulnerability, if you know
what you need and how to get it.

To find information about vulnerabilities in commercial software that
your organization uses, you have to do some research and probably spend
some money. With commercial software, the customer has little or no
insight into the implementation details. At the very best you may have
an understanding of the general architecture and design philosophy of a
package. Companies offering commercial software treat the design details
and software code as business-critical private information. In addition,
since most of these companies are highly competitive, commercial
software vendors are sometimes reluctant to share their problems, even
with their customers.

Who Knows?

So how do you find out about commercial software vulnerabilities if the
vendors are not going to tell you? During the last decade, three groups
have emerged who share the same curiosity. For the sake of discussion we
will refer to these as the hackers, the commercial interests, and the
philanthropists. The hackers, unfortunately, want to find
vulnerabilities and exposures so they can exploit them to gain access to
your systems and information.

Those with commercial interests want to be hired to find the mistakes,
or they want you to buy their tools to help you find the vulnerabilities
and exposures yourself. They offer their services through consultants
who will evaluate your software systems, and through tools that you can
buy and run yourself. Some proffer the use of their tools as an
Internet-based service. This group includes software and network
security companies that provide security consulting services and
vulnerability assessments, databases of vulnerabilities and exposures,
and the tools for security services and vulnerability evaluations.

The philanthropists include security researchers in various government,
academic, and non-profit organizations, as well as unaffiliated
individuals who enjoy searching for these types of mistakes, usually
sharing their knowledge and tools freely.

Each group has members focused on sharing information: among like-minded
hackers, for a price in most cases for the commercial interests group,
and generally for free in the philanthropists group. For all three
groups the search for vulnerabilities and exposures in commercial
software is challenging since the commercial marketplace is constantly
developing and authoring new classes of software capabilities and new
ways of using them. This mushrooming of commercial software capabilities
also creates an ever-changing challenge for organizations using
commercial systems. The challenge is to correctly configure and
integrate the offerings of various vendors without opening additional
vulnerabilities and exposures from configuration and permission
mistakes.

How to Find Out

In response to the arduous task of tracking and reacting to new and
changing vulnerabilities and exposures, the members of these three
groups are using Web sites, news groups, software and database update
services, notification services like e-mail lists, and advisory
bulletins to keep their constituents informed and current.

So information on vulnerabilities in commercial software is available.
That is great, right? Well, not quite. There are several problems. The
biggest is that each organization (or individual) in these three groups
has been pursuing their vulnerability discovery and sharing efforts as
if they were the source of information on vulnerabilities. Each uses its
own approach for quantifying, naming, describing, and sharing
information about the vulnerabilities that they find. Additionally, as
new types of software products and networking are introduced, whole new
classes of vulnerabilities and exposures have been created that require
new ways of describing and categorizing them.
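The naming mismatch described above can be made concrete with a toy
sketch; every source name and common identifier below is invented,
purely to illustrate why a shared mapping is needed:

```python
# Toy illustration of the naming problem: three sources report the same
# flaw under different names, so cross-referencing them requires an
# explicit shared mapping.  All names and identifiers are invented.

aliases = {
    "hacker-board: ftpd-root-smash": "COMMON-0001",
    "ScannerCo check #2041":         "COMMON-0001",
    "freeware-db: wuftpd-overflow":  "COMMON-0001",
    "ScannerCo check #1100":         "COMMON-0002",
}

def same_issue(name_a, name_b):
    """Two reports refer to the same flaw only if the mapping says so."""
    id_a = aliases.get(name_a)
    id_b = aliases.get(name_b)
    return id_a is not None and id_a == id_b

# Without the mapping there is no way to tell these describe one flaw:
print(same_issue("hacker-board: ftpd-root-smash",
                 "ScannerCo check #2041"))  # -> True
```

Without an agreed common identifier the check above is impossible, which
is exactly the interoperability gap each group's private naming scheme
creates.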

Another problem is that finding the vulnerabilities and exposures within
systems is just the first step. What we really want to do is to take the
list of vulnerabilities and get them fixed. This is the software
vendors' domain -- those who create and maintain our commercial products.
Unless they use the same descriptions and names as the hackers,
commercial interests, and philanthropists groups, it is difficult,
confusing, and frustrating to get the fix for any particular problem you
have found.

A Closer Look at Who Knows What

The Internet is the main conduit hackers use to share information on
vulnerabilities and how to exploit them. Different member organizations
in the commercial interests group have their own mechanisms for sharing
vulnerability information. For example, the tool vendors create
vulnerability scanners that are driven by their own vulnerability
databases. The intrusion detection system (IDS) vendors build different
types of software systems for monitoring your network and systems for
attacks. There are also scanner and IDS tools available from the
philanthropists group as freeware. Both the scanner and IDS providers
have to continuously update their tools with new information on what and
how to look for problems. Examples of these organizations and tools are
shown in Table 1.

Product           Tool Type          Organization
Centrax           scanner/IDS        CyberSafe
CyberCop          scanner            Network Associates
Dragon            IDS                Network Security Wizards
HackerShield      scanner            BindView Corporation
LANPATROL         IDS                Network Security Systems
Nessus            freeware scanner   Renaud Deraison & Jordan Hrycaj
NetProwler        IDS                AXENT Technologies
QualysGuard       ASP-based scanner  Qualys
RealSecure        IDS                Internet Security Systems
Retriever         scanner            Symantec Corporation
SAINT             scanner            World Wide Digital Security
SecureIDS         IDS                Cisco Systems
STAT              scanner            Harris Corporation
SWARM             scanner            Hiverworld, Inc.
               Table 1. Scanner and IDS Offering Examples

Scanners typically include tests that compare version information and
configuration settings of software with an internal list of
vulnerability data. They may also conduct their own scripted set of
probes and penetration attempts. IDS products typically look for
indications of actual attack activities; many of these can be mapped to the
specific vulnerabilities that these attacks could exploit. The scanner
market recently developed a self-service-based capability. It uses
remotely hosted vulnerability scanners on the Internet that you can hire
to scan your Internet resident firewalls, routers, and hosts. The
results of the scans are provided through a secure link, and you can
usually run the scans whenever you want. These scans are shielded from
everyone but you, including the service provider. IDS capabilities are
often available as part of a managed security service, where the
organization contracts out the intrusion detection and monitoring to a
security services vendor.
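As a rough illustration of the version-comparison technique scanners
use, described above, here is a minimal sketch; the products, version
numbers, and advisory identifiers are all invented:

```python
# Minimal sketch of the version-comparison check a scanner might run.
# The vulnerability "database" and the host inventory are invented;
# real scanners maintain far larger, continuously updated databases
# and supplement lookups with scripted probes.

# Each entry: (product, last vulnerable version, advisory id)
VULN_DB = [
    ("examplehttpd", (1, 3, 12), "ADV-2001-001"),
    ("examplemail",  (2, 0, 4),  "ADV-2001-002"),
]

def parse_version(s):
    """Turn '1.3.9' into a comparable tuple (1, 3, 9)."""
    return tuple(int(part) for part in s.split("."))

def scan(inventory):
    """Compare installed versions against the vulnerability list.

    inventory: dict mapping product name -> installed version string.
    Returns a list of (product, advisory) pairs that need patching.
    """
    findings = []
    for product, last_bad, advisory in VULN_DB:
        installed = inventory.get(product)
        if installed is not None and parse_version(installed) <= last_bad:
            findings.append((product, advisory))
    return findings

# Example run against a hypothetical host inventory:
hosts = {"examplehttpd": "1.3.9", "examplemail": "2.1.0"}
print(scan(hosts))  # -> [('examplehttpd', 'ADV-2001-001')]
```

Real scanners combine this kind of lookup with configuration checks and
penetration scripts, which is why their databases must be updated
continuously to stay useful.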

Both IDS and scanner tool providers harvest information about
vulnerabilities and exposures from public information sites, hacker
sites, newsletters, and advisories. They also have their own
investigative researchers who continuously look for new vulnerability
information that will make their company's offering better than the
competition, as well as providing them with the "free" advertising that
comes with finding and publicly reporting new vulnerabilities and
exposures. Typically, these researchers serve as consultants within the
company, offering their services to evaluate an organization's systems
and networks. Their parent companies also offer databases of
vulnerabilities for a fee, although some also share the information
openly as raw information on a Web site.

Some members of the philanthropists group also offer very sophisticated
search and notification services for free, but their veracity, quality,
and levels of effort vary considerably. Examples of vulnerability-
sharing organizations are shown in Table 2.

Site Name            Type                Organization
arachNIDS            free IDS database   Max Vision Network
CERIAS Vulnerability Database  database  CERIAS/Purdue University
Fyodor's Playhouse   hacker web site     Insecure.Org
Online Vulnerability Database database
ICAT Metabase        free web site       NIST
Bugtraq mailing list list database
PacketStorm          hacker web site     Securify, Inc.
SWAT Database        database            AXENT Technologies
Vigil@nce AQL        database            Alliance Qualite Logiciel
X-Force Database     free web site       Internet Security Systems
                Table 2. Vulnerability Sharing Examples

In addition to freeware scanner, IDS tools, and vulnerability databases,
the philanthropists group's government and academic members offer several
announcement, alert, and advisory services that are widely used and
highly valued. Some commercial interests group companies offer these
types of free services as well. Examples are shown in Table 3.

Service                    Type         Organization
Bugtraq                    e-mail list  Bugtraq
Casandra                   alerts       CERIAS/Purdue University
CERT Advisories            advisory     CERT Coordination Center
CyberNotes                 newsletter   NIPC
Razor                      advisory     Bindview Corporation
S.A.F.E.R.                 newsletter   The Relay Group
SANS NewsBites             e-mail list  SANS Institute
Security Alert Consensus   e-mail list  Network Computing and SANS
SecurityFocus Newsletter   newsletter
SWAT Alerts                alerts       AXENT Technologies
X-Force Alert              advisory     Internet Security Systems
             Table 3. Alert and Advisory Services Examples

There are numerous venues for finding out what vulnerabilities and
exposures exist in your organization's commercial software systems, as
well as many tools and service providers willing to help you determine
which vulnerabilities and exposures you have.

The three groups we've covered -- hackers, commercial interests, and
philanthropists -- all address locating the vulnerabilities and
exposures in the commercial software that forms the base of your live
systems and networks.

We will now address finding the "fixes." The product vendors who make
the software in which these vulnerabilities were found provide the
solutions for vulnerabilities. Many of them have their own methods of
providing their customers with software fixes and updates. Until
recently, most vendors were not very proactive in distributing patches
and updates outside of their normal software development cycle. This has
improved considerably. Now, many major vendors provide alerts and
advisories concerning security problems, fixes, and updates (see Table
4).

Service                           Type       Organization
IBM ERS                           advisory   IBM
Microsoft Product Security Notif. advisory   Microsoft Corporation
SGI Security Advisory             advisory   Silicon Graphics, Inc.
Sun-alert                         alert      Sun Microsystems, Inc.
          Table 4. Vendor Alert and Advisory Services Examples

But can these various vulnerability services, tools, and databases,
along with the software vendor's update announcements effectively
combine to help you assess, manage, and fix your vulnerabilities and
exposures? The short answer is that it used to be very difficult, but
now a way to do it seems to be at hand. So what was wrong, and what has
changed?

                           (To Be Continued)


                  AQuIS 2002: Conference Announcement

  The 5th International Conference on "Achieving Quality In Software"
                       Venezia, March 11-13, 2002



Good or evil though the software may be, a theorem holds: Quality in
life strictly depends on quality in software.

The "Achieving Quality In Software" Conference Series couldn't open the
millennium with a less ambitious goal. After ten years of effort in
putting together the diverging interests of academia and industry, which
still stay apart, attention is also focused on the principal characters
on stage, us, the final users.

New, surprising features will be deployed at the Conference. Traditional
technical topics will be discussed close to novel, yet essential aspects
for quality to be achieved, touching both ends of human involvement in
quality, from engineers (so many are needed!) to clients (they are
counted by the billions!). And the message to be spread beyond the
Conference room, to reach all of them, is:

"Better software technology is a necessary ingredient for better life".

So, let the practitioners learn and act for this to be achieved.

Together with worldwide-known Experts invited to show and debate their
best solutions to reach these goals, Authors are solicited to submit
papers explaining the way to improve everybody's lifestyle through
advances in:

Technical areas:

      Software engineering best practices
      Software testing
      Software processes
      Quality models
      Formal methods
      Internet software
      Standards and certification in software

Non technical areas:

      Human behavior and productivity
      Impact of software quality on human lifestyle
      Raising awareness in end-users
      Raising awareness in decision making environments

PROGRAM COMMITTEE (provisional):  Antonia Bertolino, Italy; Debra
Richardson, USA; Eduardo Miranda, Canada; Edward Miller, USA; Emilia
Peciola, Italy; Giovanni Vigna, USA; Istvan Forgacs, Hungary; Mario R.
Barbacci, USA; Mehdi Jazayeri, Austria; Rob Hierons, UK; Terry Rout,
Australia; Tor Stalhane, Norway; Clanio Figueiredo Salviano, Brazil.


        September 14, 2001: Papers due
        November 26, 2001: Notification of acceptance
        December 21, 2001: Final papers

The conference is co-located with the Second International SPICE
Conference, Venezia, March 14-15


      Mario Fusani
      I.E.I. CNR
      Area della Ricerca di Pisa-San Cataldo
      56100 Pisa, Italy
      Voice: +39 050 315 2916
      Fax: +39 050 315 2810

      Fabrizio Fabbrini
      I.E.I. CNR
      Area della Ricerca di Pisa-San Cataldo
      56100 Pisa, Italy
      Voice: +39 050 315 2915
      Fax: +39 050 315 2810


               Software Process Improvement in the Small

          Robert P. Ward, Mohamed E. Fayad, and Mauri Laitinen

There are two battles over process that every small software company
must win to be successful.

The first is the battle to convince the company to adopt reasonable
development processes at all.  Discussion of what makes up a good
process may be an interesting meditation, but is entirely moot until the
company commits to a policy of process improvement.  The second battle
is never over.  It is to change existing processes to match changing
circumstances.

Within the arena of professional opinion, if not actual practice, the
first battle may be all but over.  Just a few years ago, the term
"process" was associated mainly with training programs and human
resources departments.  Now it stands on the verge of internet startup
buzzwordhood.

But within any particular company the energy barrier between good
intentions and actual implementation can be very high indeed.   This is
often a matter of high expectations that can never be met under typical
real conditions.  Engineers often tend to idealize what it means to "do
it right," a statement of intention usually meant to include such
efforts as up-to-date documentation, properly done architecture and
design phases and rigorous quality assurance efforts.  When development
projects do not manage to produce these things, as most new and small
projects do not, the risk is an unintentional abandonment of process
altogether.

We suggest a fresh start by considering that the processes by which
software is developed are likely to change with circumstance - perhaps
even change dramatically - even while general principles like the need
for good communication remain constant.  For example, the most effective
form of code review for a group of three or four developers at the
beginning of a product cycle might well be a spontaneous discussion in
front of a white board.  An appropriate process might be as simple as
agreeing on the optimum frequency of such discussions; ensuring that no
one's code is excluded from the discussion; and designating someone to
ensure that the discussions actually take place.

We also suggest that the consideration of appropriate process be goal-
and risk-driven.  A group rushing to develop a prototype in order to
obtain angel funding will have a quite different set of goals and risks
from the group engineering version three of an existing product.
Understanding goals and risks goes a long way towards suggesting good
processes that are, after all, just agreements on how best to make
difficult work prone to success.

Note that process improvement for a large company aims to improve
efficiency and reliability while reducing costs.  Some companies report
as much as a 7 to 1 reduction in cost.  While startlingly high, these
numbers are irrelevant to small organizations.  A small, growing company
would most likely not have the accrued overhead to see such efficiency
improvements, and it would be far more interested in controlling the
chaos inherent in growth.

Discussion of the best processes to adopt is likely to bring up the
choice of prêt-à-porter vs. tailor-made.  Any number of well-defined
processes are available for adoption - Extreme Programming, RUP, SCRUM,
among others.  In addition, there are meta-processes, such as the
Capability Maturity Model (CMM), the Personal Software Process (PSP),
and the Team Software Process (TSP).  Clearly, these processes and
meta-processes operate at different levels with different degrees of
formality.  Engineering groups have the luxury of selecting one or of
developing some variant of an accepted model.  The discussion to avoid
is the one in which formal processes are compared out of the context of
the particular group's particular circumstances.  We believe that
abstract side-by-side comparisons of process models are essentially
meaningless.

An often-told tale has a sitar player asking the Buddha how best to tune
his instrument.  Leaving aside the question of why a musician would ask
musical advice of a monk, the famous answer: "not too tight, and not too
loose" is a good general rule.  Recently, a company of about 100
employees that makes business-to-business electronic commerce
infrastructure software hired a new president and COO.  His most visible
decision early on was to take the company from its creaky ad hoc
processes to a company-wide process for determining what subsequent
releases should contain and what the schedule should be, among other
decisions.  He was careful to emphasize a limited number of
fundamentals, such as making sure that various departments shared their
plans - marketing, professional services, engineering, etc. - with each
other and ensuring that everyone understood what the company was trying
to accomplish with upcoming releases.

The actual processes that were adopted were neither novel nor
particularly inventive.  Rather, they had the virtues of being easy to
explain and relatively easy to comply with, with goals easily
describable as having been met or not.  Subsequent releases under the
new regime were successful.

The problem with successful processes is that they, by definition,
shift the ground beneath a company's feet.  Success might result in an
expanding staff, diversifying product line, new international sales,
more aggressive release schedules, almost anything.  Whatever the
specific change, the processes that helped create the change in the
first place are likely to require overhaul as a result.  While this is
not a surprising dialectic, it can be difficult to manage and to some
extent flies in the face of that conventional wisdom that looks to
identify a "best" process model.  Thus change management is an essential
component of process improvement.

Can a company manage growth-induced change in its fundamental product
development processes while simultaneously maintaining enough continuity
to keep the process at least minimally predictable and allow employees
to plan ahead?  We believe it is possible, given that certain
fundamentals are observed.

o A process is a tool rather than an end in itself.  No process makes
  personnel interchangeable or substitutes for talent.  Nor can a
  process, by itself, transform an indifferent organization into an
  effective one.  Improvement comes from management assuring that tools
  are properly applied.
o Processes must be simple.  As the example above illustrated, the new
  company president implemented a limited number of fundamental
  processes.  Complex processes are hard to follow, hard to update, and
  quickly become unsuitable for the operations they originally served.
o Processes must be robust.  Robustness in this context means a process
  must be easy to apply and must be difficult to get wrong without
  warning indications.  It also means that as an organization changes,
  its processes should likewise be amenable to change.  Obviously,
  simpler processes tend to be more robust.

We should emphasize that these guidelines do not argue against adopting
best practices, but rather they argue against the prescription of
processes and process models without close attention to context.

The claim that the Personal Software Process and Team Software Process
are models designed for small organizations is an example of a process
improvement prescription applied in the wrong context.  The PSP is a
method for individual improvement, while the TSP, which requires all
team members to be PSP proficient, specifies roles and approaches to
development.  While these process models show some promising results,
they have been implemented for the most part in larger organizations
with substantial resources.  Given that PSP demands between 100 and 200
hours of initial formal training and an extremely high burden of data
collection and analysis, it is unlikely that small companies with
limited time and financial resources would risk starting their process
improvement program with the PSP.

In summary, processes "done right" for small companies will probably be
less than ideal from the viewpoint of both critics and implementers.
The exigencies of limited resources and personnel will outweigh the
interests of installing full-fledged formal processes at the outset.
While the cost of quality may be free in the long run, a small company,
especially a startup, cannot ignore its up-front costs.  Common sense
dictates that processes be established that match the needs of the
organization and grow with it rather than being imposed by some abstract
ideal.

Robert P. Ward ( is a consultant in San
Mateo, CA.

Mohamed E. Fayad ( is J.D. Edwards professor at
University of Nebraska, Lincoln.

Mauri Laitinen ( is a principal in Laitinen Consulting at
Lake Tahoe, CA.


                     A Comment on O'Neill's Review
                     by Don Mills (

In the May issue, Don O'Neill made some adverse comments regarding Fred
Brooks' classic, "The Mythical Man-Month".

I have no quarrel with what Don wrote as such, but -- not having read
Brooks' work myself -- I was struck by one thing in the critique: its
seeming emphasis on the quality (or otherwise) of program code and of
system design.  "Real programmers thrive on the pursuit of perfection,"
he writes, but so do "real" managers, "real" analysts, "real" designers,
and "real" testers.  Meanwhile, according to successive reports over the
past six years from the Standish Group, Collins & Bicknell, Construx
Software, and others, between 75% and 85% of all software projects are
outright or technical failures, and the proportion is rising rather than
falling.  Evidently, "real" people are in even shorter supply than IT
professionals in general.

Despite Don's emphasis on coding and design (apparently reflecting
Brooks' biases), I find as a jobbing tester that the majority of bugs I
encounter, and (mostly) the worst bugs, lie in the poor quality of
original problem specifications.  And although I must agree with Don's
comments regarding "bad estimation and uncontrolled change", I must also
observe that (again, in my experience) programmers tend to suffer
terribly from the "rush-to-coding" syndrome.  If "quality" means (as I
believe) "cost-effective fitness for purpose", the besetting sin of many
software development projects is failure to find out enough about "the
purpose" before starting out on design and construction.

Consider the very word, "specification".  BA's and RE's blithely write
specifications with little thought to what the word means.  Here's a
useful definition:

"specification.  (1) The activity or process of rendering something (a
situation, statement, etc.) specific.  (2) A document containing
specific statements as the basis for conducting some piece of work."

The adjective, "specific", is embedded in the word.  Here's another
useful definition:

"specific.  (Of a situation, requirement, statement, etc.)  Clear,
complete, consistent, detailed, and unambiguous; not open to assumptions
or multiple interpretations with regard to meaning or use."

From my work with "specifications" written by many Australian and New
Zealand BA's and RE's, it is clear to me that this fundamental concept
has been lost sight of.  To an overwhelming degree, "specifications" are
unclear, incomplete, inconsistent, lacking in detail, and riddled with
ambiguities: in a word, non-specific.  Rather than business rules, they
present what the Business Rules Group calls, "business ramblings".  And
all too often, they confuse levels of abstraction, and present software
solution designs as part of the business problem description.  To coin
another adjective out of the middle of "specification", most so-called
specifications are "ific" rather than "specific".

It's all very well for Don to write that "software is a process of
experimentation".  This may be true, although the concept of perpetual
cutting-edge R&D applied to what is nowadays the major asset
of many organizations sends shivers up my spine.  But in any case, for
most organizations, "software" is still part of the "how" of their
business, not the "what", whereas it's precisely the "what" that's
missing or obscured in so much business software that comes before me
for testing.  The "how" is enormously dynamic nowadays, but the "what"
remains fundamentally unchanged.  Yet what we find again and again is
that, lacking *specific* information about business policies and
business rules, programmers are left to make up details of how a
business is going to run ...

I agree with most of what Don wrote, it's what he didn't write that
upsets me, since I experience its pointy end week after week after week.
I assume he felt constrained in some degree by the topics Brooks himself
touched on; but right there, surely, is where a critique should have
waved its arms and shouted, "Hoy!  You're missing something here!"


           Open-Source Security Testing Methodology Manual,
                 by Pete Herzog (

Read it: <>

"This manual is to set forth a standard for Internet security testing.
Disregarding the credentials of many a security tester and focusing on
the how, I present a solution to a problem which exists currently.
Regardless of firm size, finance capital, and vendor backing, any
network or security expert who meets the outline requirements in this
manual is said to have completed a successful security snapshot. Not to
say one cannot perform a test faster, more in depth, or of a different
flavor. No, the tester following the methodology herein is said to have
followed the standard model and therefore, if nothing else, has been
thorough.

I say security snapshot above because I believe an Internet security
test is no more than a view of a system at a single moment in time.  At
that time, the known vulnerabilities, the known weaknesses, the known
system configurations have not changed within that minute and therefore
is said to be a snapshot. But is this snapshot enough?

The methodology proposed herein will provide more than a snapshot if
followed correctly with no short-cuts.  Except for known
vulnerabilities in an operating system or application, the snapshot will
be a scattershot-- encompassing perhaps a few weeks rather than a moment
in time.

I have asked myself often if it is worth having a central standard for
security testing. As I began to write down the exact sequence of my
testing to share synchronously the active work of a penetration test, it
became clear that what I was doing is not that unique. All security
testers follow one methodology or another. But are all methodologies
equal?

All security information I found on the Internet regarding a methodology
was either bland or secret. "We use a unique, in-house developed
methodology and scanning tools..." This was a phrase found often. I
remember once giving the advice to a CIO that if a security tester tells
you his tools include ISS, Cybercop, and "proprietary, in-house
developed tools" you can be sure he mainly uses ISS and Cybercop. That's
not to say many don't have proprietary tools. I worked for IBM as an
ethical hacker. They had the Network Security Auditor (NSA) which they
now include in their firewall package. It was a good, proprietary tool
with some nice reporting functions. Was it better than ISS or Cybercop?
I couldn't say since we also used ISS to revalidate the NSA tests. This
is due to the difficulty of keeping a vulnerability scanner up-to-date.

I feel it is valid to be able to ask companies if they meet a certain
standard. I would be thrilled if they went above the standard. I would
also know that the standard is what they charge a certain price for and
that I am not just getting a port scan to 10,000 ports and a check of
4,800 vulnerabilities, especially since most of these apply only to a
certain OS or application. I'd like to see vulnerability scanners break
down that number by OS and application. I know if I go into Bugtraq (the
only true vulnerability checking is research on BT) that I will be able
to find all the known vulnerabilities by OS and application. If the
scanner checks for 50 Red Hat holes in a certain flavor and 5 Microsoft
NT holes and I'm an NT shop, I think I may try a different scanner.

So following an open-source, standardized methodology that anyone and
everyone can open and dissect and add to and complain about is the most
valuable contribution we can make to Internet security. And if you need
to know why you should recognize it and admit it exists whether or not
you follow it to the letter is because you, your colleagues, and your
fellow professionals have helped design it and write it. Supporting an
open-source methodology is not a problem of making you equal with all
the other security testers-- it's a matter of showing you are just as good
as all the other security testers. The rest is about firm size, finance
capital, and vendor backing."
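Herzog's suggested breakdown by OS and application is simple to compute
once each scanner check carries platform tags. A minimal sketch, with
invented check entries:

```python
# Sketch of the per-OS breakdown suggested above: tag each scanner
# check with the OS and application it targets, then count.  The
# check entries here are invented for illustration.
from collections import Counter

checks = [
    {"id": 1, "os": "Red Hat Linux", "app": "wu-ftpd"},
    {"id": 2, "os": "Red Hat Linux", "app": "bind"},
    {"id": 3, "os": "Red Hat Linux", "app": "sendmail"},
    {"id": 4, "os": "Windows NT",    "app": "IIS"},
]

by_os = Counter(c["os"] for c in checks)

# An NT shop can now see how much of the advertised check count
# actually applies to its platform:
print(by_os["Windows NT"], "of", len(checks), "checks apply to NT")
# -> 1 of 4 checks apply to NT
```

A scanner advertising thousands of checks could publish exactly this
kind of per-platform count, letting customers judge how many of them
actually apply to their own systems.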

      ------------>>> QTN ARTICLE SUBMITTAL POLICY <<<------------

QTN is E-mailed around the middle of each month to over 9000 subscribers
worldwide.  To have your event listed in an upcoming issue E-mail a
complete description and full details of your Call for Papers or Call
for Participation to .

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the QTN issue date.  For example,
  submission deadlines for "Calls for Papers" in the March issue of QTN
  On-Line should be for April and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc., and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items appearing in QTN represent the opinions
of their authors or submitters; QTN disclaims any responsibility for
their content.

STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR logo are
trademarks or registered trademarks of Software Research, Inc. All other
systems are either trademarks or registered trademarks of their
respective companies.

          -------->>> QTN SUBSCRIPTION INFORMATION <<<--------

To SUBSCRIBE to QTN, to UNSUBSCRIBE a current subscription, to CHANGE an
address (an UNSUBSCRIBE and a SUBSCRIBE combined) please use the
convenient Subscribe/Unsubscribe facility at:


As a backup you may send Email direct to  as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:

   TO UNSUBSCRIBE: Include this phrase in the body of your message:

Please, when using either method to subscribe or unsubscribe, type the
 exactly and completely.  Requests to unsubscribe that do
not match an email address on the subscriber list are ignored.

		Software Research, Inc.
		1663 Mission Street, Suite 400
		San Francisco, CA  94103  USA

		Phone:     +1 (415) 861-2800
		Toll Free: +1 (800) 942-SOFT (USA Only)
		Fax:       +1 (415) 861-9801
		Web:       <>