sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +=======    Quality Techniques Newsletter    =======+
         +=======             March 2003              =======+

QTN is distributed to subscribers worldwide to support the Software
Research, Inc. (SR), TestWorks, QualityLabs, and eValid user
communities and other interested parties, and to provide information
of general use to the worldwide internet and software quality and
testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of QTN provided that the
entire document/file is kept intact and this complete copyright
notice appears with it in all copies.  Information on how to
subscribe or unsubscribe is at the end of this issue.  (c) Copyright
2003 by Software Research, Inc.


                       Contents of This Issue

   o  eValid Used in E-Commerce Methods and Metrics Course

   o  Educational Opportunities with Amibug

   o  Second NJITES Symposium on Cybersecurity and Trustworthy Software

   o  Good Enough Software, by Boris Beizer, Ph. D.

   o  Controlling Software Costs, by William Roetzheim

   o  eValid Updates and Specials

   o  Testing and Certification of Trustworthy Systems

   o  QSIC 2003: Third International Conference on Quality Software

   o  UNU/IIST: International Institute for Software Technology

   o  QTN Article Submittal, Subscription Information


       eValid Used in E-Commerce Methods and Metrics Course


eValid has been chosen as an example website test system for use in
Prof. Neale Hirsh's graduate level course on e-Business technology
and metrics at Johns Hopkins University: Electronic Commerce Methods
and Metrics.

                         Course Description

E-commerce is a general term for doing business on the Worldwide
Web. To a great extent, success or failure of the business depends
on the tradeoff of cost, revenue, and service per type of customer.
In this course, tradeoffs are taught by utilizing data from web
access logs, measurements of website performance, server benchmarks,
product literature, and demographics.

The information is consolidated through quantitative techniques such
as linear programming. Students summarize the tradeoffs in written
reports, which are subsequently presented and discussed in class.
During the course, students also disassemble a contemporary
e-commerce server to acquire a closer understanding of e-commerce.
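
The kind of quantitative tradeoff the course describes can be
sketched in a few lines. With a single capacity constraint, the
linear-programming optimum reduces to a greedy fill by per-visit
margin; every segment figure below is invented for illustration.

```python
# Toy e-commerce tradeoff: visits/day, revenue/visit, service cost/visit.
# All figures are invented for illustration.
segments = {
    "browsers": (9000, 0.02, 0.01),
    "buyers":   (900,  4.00, 0.40),
    "support":  (100,  0.00, 1.50),
}
capacity = 5000  # visits/day the site can actually serve

# With one capacity constraint, the LP optimum is a greedy fill:
# serve the highest-margin traffic first, skip negative-margin traffic.
plan, remaining = {}, capacity
for name, (visits, revenue, cost) in sorted(
        segments.items(), key=lambda kv: kv[1][1] - kv[1][2], reverse=True):
    margin = revenue - cost
    served = min(visits, remaining) if margin > 0 else 0
    plan[name] = served
    remaining -= served

profit = sum(plan[s] * (segments[s][1] - segments[s][2]) for s in segments)
print(plan, round(profit, 2))
```

Here the optimum serves all buyer traffic first, fills the remaining
capacity with browsers, and serves none of the money-losing segment.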

Contact: Dr. Neale Hirsh, Adjunct Professor, E-Commerce, Johns
Hopkins University, Baltimore, Maryland.  Email:


               Educational Opportunities with Amibug


For over two years AmiBug.Com has developed a series of popular
courses designed to teach many aspects of software testing to
testing practitioners and other software development professionals.
To teach web based test automation we allow each student to operate
a version of eValid with a series of example scripts used to
demonstrate automated web application functional testing,
performance measurement and load generation. Scripting concepts and
data driven test automation are taught with practical eValid
examples.  eValid allows the students to quickly grasp important
test automation concepts without having a steep tool-oriented
learning curve.

                        Course Descriptions

Courses are offered directly by AmiBug.Com or through various
business partners.

Check out the current offerings by AmiBug at the Internet Institute
in Ottawa, Canada. All courses use eValid for test automation, load
testing, and performance testing demonstrations and in-class
exercises.

Testing Web Applications

Software Testing Methods and Tools

Practical Hands-On Testing

                           About AmiBug

AmiBug also offers two on-site workshops which feature eValid:

Web Application Testing with eValid (S201)

Web Performance, Stress and Load Testing with eValid (S202)

                        Product Evaluation

eValid is the only tool set AmiBug has found that covers the breadth
of testing types without overwhelming students with tool-dependent
features and facilities! eValid wins hands down over all its
competitors.

Contact: Robert Sabourin.


                     Second NJITES Symposium on
               Cybersecurity and Trustworthy Software
                       Monday, April 28, 2003
                  Stevens Institute of Technology
                      Hoboken, New Jersey, USA

        Symposium web site:

       E-mail inquiries:

This symposium brings together researchers and practitioners, in
government, academia and industry, to discuss problems and possible
solutions in cyber security, both for e-commerce and for homeland
security. A particular emphasis of the symposium is to bring
together those interested in communications security and in end-to-
end security.

   8:30-9:15 Registration and breakfast

   9:15 Opening remarks

   9:30-10:30 Keynote talk: Computer Security.
          Ed Felten, Princeton University.

   10:30-11:00 Coffee break.

   11:00-11:30 Cryptology and non-computer security.
          Matt Blaze, AT&T Labs-Research.

   11:30-12:00 Privacy-protecting statistics computation:
          theory and practice.
          Rebecca Wright, Stevens Institute of Technology.

   12:00-12:30 Flexible Regulation of Distributed Coalitions.
          Naftaly Minsky, Rutgers University.

   12:30-2:00 Lunch.

   2:00-3:00 Keynote talk: Toward fixing the "compliance defects"
          of public key cryptography.
          Michael Reiter, Carnegie Mellon University.

   3:00-3:30 Coffee break.

   3:30-4:00 Dependent session types for safety
          in distributed communications.
          Adriana Compagnoni, Stevens Institute of Technology.

   4:00-4:30 Improving security with distributed cryptography.
          Tal Rabin, IBM Hawthorne Research.

   4:30-5:00 Type-Based Distributed Access Control.
          Tom Chothia, Stevens Institute of Technology.

   5:00 Concluding remarks.


                       Good Enough Software
                        Boris Beizer, Ph.D.

      Note:  This article is taken from a collection of Dr.
      Boris Beizer's essays "Software Quality Reflections" and
      is reprinted with permission of the author.  We plan to
      include additional items from this collection in future
      issues.
      Copies of "Software Quality Reflections," "Software
      Testing Techniques (2nd Edition)," and "Software System
      Testing and Quality Assurance," can be obtained directly
      from the author.

Software development has always been done to a "good enough"
standard.  At least it has been so in my 40 years of observation.
Some things, however, have changed over the past decade.

1.  We are more candid about it, less guilt-ridden over bugs, and
generally more realistic.  The change here is that we are playing
down the notion of "perfect" or "bug-free" software.  Only the
lawyers seem not to have caught on to this, and their "don't sue
us" stance continues to be a major barrier to good software.

2.  The users (consumers, corporate executives who are being hosed
by their IT departments) are far less tolerant of bugs than they
were in the past.  This growing savvy is a source of anguish to
marginal software developers and a benefit to the public.

3. Consumerized software.  Shrink-wrapped software is the prime
example.  When software becomes a consumer product, expect consumer
activism and class-action lawsuits.

4. Industry consolidation continues at a furious pace.  The
surviving players are better, smarter, and produce better software.
They are also bigger and therefore can make the capital investment
in tools and especially training, that all quality improvements
require.  Consolidation is not just Microsoft buying up everything
in sight, but also increased outsourcing, increased use of
commercial packages, increased divestiture of IT/DP departments to
contractors, etc.  Consolidation within the software industry is
also occurring as a direct consequence of consolidation in other
industries (e.g., banking). The days of three guys at the kitchen
table turning into a Microsoft are long past.  If there are three
guys there (and there are) they succeed by carving out a small but
profitable niche and making deals with Microsoft (say) to
incorporate their goodie into the big package. The nature, the pace,
the progress, and the consequences of this consolidation are
virtually identical to those of the analogous consolidation that
took place in the first two decades of the 20th century in the
automobile industry.

5. You might argue that consolidation led to rotten cars from GM.
Yes and no.  Had the consolidation not taken place, what cars there
were would be priced out of the reach of almost everybody. One
cannot claim that the Model T was even remotely comparable to a
hand-built Bugatti -- but then how many Bugattis were built and how
many Model Ts?  The three principal US auto manufacturers failed in
a different way in the post WWII era:

a. Ford kept believing that they could dictate customer preferences
and that their customer loyalty was un-assailable.

b. GM kept believing that buyers wanted "style" above all other
considerations.

c. Chrysler kept on believing in their own engineering superiority
myths decades after that myth was no longer true.

The penetration of the American auto market by Japan can be
attributed to Japan's ability to perceive that the world had changed
and that quality had become an important ingredient in market
penetration: not quality for the sake of quality, but quality that
is "just good enough."  If technology and economics had made quality
a non-issue, and the market had instead demanded tail fins,
electroluminescent colors, and scent amplifiers, you can bet that
next year's Japanese product would have had huge fins, radiated
across the entire visible spectrum, and carried the appropriate
stinks.

6.  "Good Enough" is a dangerous idea.  It is a dangerous idea but
we must be reconciled to live with that danger and allow the market
to mitigate the danger for us.  It is a dangerous idea because it
seems to give license for the continued production of junky software
-- after all, what developer, no matter how inept and no matter how
superficial their testing, doesn't believe that the quality they
produced is "good enough" at the time they released it?  This
"license to kill," however, is only operative if the licensee adopts
"good enough quality" as a mere slogan rather than as a fundamental
and integral part of their process.  Here are some other dangerous
notions -- when they are insubstantial slogans.

a.  Quality is our most important product.

b.  Zero-defect goal

c.  Six sigma software

d.  Risk managed quality

e.  100% branch cover

I could go on, but I think that gets the gist across.  Every slogan
is dangerous because there are always poor benighted souls who will
opt for the slogans and not the substance that gives truth to the
slogan.  But why should that bother us?  There isn't a construct in
our programming language that isn't dangerous in some context or
another.  There isn't an operating system call that can't be
likewise abused. What is wrong here are some hyper-academics whose
only notions of software development come from observations of
immature and poorly trained student programmers -- the world must be
made safe for their likes.  There isn't an idea in technology that
can't (and won't) be abused.  So the putative dangers of "just good
enough quality" are simply not germane.

7.  Formalities and all of that.  Let's talk about "formalities"
such as various coverage criteria, failure rates, and all the rest.
First the coverage criteria.

a. Coverage criteria are objective measures of what has and hasn't
been tested.  From the earliest literature on the subject to the
present, the leading thinkers and (knowledgeable) expositors have
stated clearly and reiterated ad nauseam that coverage criteria are
necessary but never sufficient. Somehow, the critics of the use of
proper coverage standards keep on saying that we believe these to be
sufficient testing criteria.  You must, absolutely must, at least
once in the software's lifetime (typically in unit testing) test
every statement and every branch -- more comprehensively, you must
assure that every executable object instruction is exercised at
least once (that takes care of most of the coverage criteria) and
also assure that every data relation is tested at least once (that
takes care of the rest). Why?  Because if you do not, you are
guaranteed to miss whatever bugs exist in what you did not test.
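
The point can be made concrete with a toy branch monitor (the
instrumentation and names here are mine; real coverage tools
automate this): a suite that passes every test can still skip
exactly the branch that hides a bug.

```python
branches_taken = set()

def clamp(value, low, high):
    """Clamp value into [low, high].  The 'above' branch hides a bug."""
    if value < low:
        branches_taken.add("below")
        return low
    if value > high:
        branches_taken.add("above")
        return low          # BUG: should return high
    branches_taken.add("in-range")
    return value

# A suite that passes, yet never drives value above high:
assert clamp(5, 0, 10) == 5
assert clamp(-3, 0, 10) == 0

missing = {"below", "above", "in-range"} - branches_taken
print("untested branches:", missing)   # the bug hides in the 'above' branch
```

Branch coverage reports that gap objectively; only a test such as
clamp(99, 0, 10) would expose the defect.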

b. Coverage criteria (when measured by coverage tools) are
objective.  They are one of the few objective methods we have in
testing. Inspections, for example, while cost-effective, are not
objective because they are conducted by humans over the code they
think they see rather than the code that is. Does the use of gauges
to measure whether manufactured parts are in or out of tolerance
mean that cars will be good?  Of course not.  Junky, useless, and
unreliable cars can be built out of parts built to micron
tolerances.  Does the use of coverage criteria mean that the
software will be good?  Of course not; because you can test to
arbitrarily stringent coverage criteria and still not have tested
anything of importance.  But just because part tolerances are not
sufficient to guarantee good cars doesn't mean we throw out part
inspections -- and just because coverage criteria are insufficient
to guarantee good software, or even good testing, doesn't mean we
throw out testing to coverage standards.  And please, let's stop
dragging out that tired straw-man criticism of coverage criteria
that they aren't sufficient.  No right thinking person ever said
that they were.

c.  Coverage criteria establish a testing floor below which rational
developers and testers will not sink.  They have never been touted
as a goal to which testers should aspire (except by people who have
only a superficial or hopelessly outmoded understanding of testing).

8.  Now for statistical models such as software reliability models.
All of these models are attempts to quantify what we mean by "good
enough."  There is a huge literature on software reliability models
and some of it may even apply to software.  In those applications
for which these models have proven worth, such as telecommunications
and control systems, not to use them borders on idiotic.
Unfortunately, in many cases, the various failure models can't be
used because of unstable usage profiles or the fundamental
impossibility of getting usage profiles in the first place.  But
over the past ten years, there have emerged many other quantitative
statistical models that approach the notion of "good enough."
Noteworthy among these are Voas' testability, Hamlet's probable
correctness,  and some of Howden's stuff -- among many others.
These alternative notions do not try to predict failure rates or the
expected number of remaining bugs; instead they are based on some
quantitative notion of statistical confidence -- which is to say,
"have we tested enough?" or alternatively, "is this software good
enough?"  Here I must put in a plug for our hard working research
community.

a.  Practitioners complain that these models and notions are
academic, based on toys, and executed over software built by novices
(students), but they won't give the notions a trial on their real
software, written by real programmers.

b.  If they do try it and it succeeds, they won't publish the
results in order to retain a competitive advantage.

c.  If they try it and it fails, they won't publish the results in
order to avoid embarrassment.

d.  It doesn't matter anyhow, because any public disclosure based on
a non-zero bug probability will be squelched by the corporate
lawyers who are afraid of product liability/merchantability issues.

9.  What do we really need?  Quality control for manufactured
products has the notion of "good enough" built in.  The cost of
increased quality control (e.g., more testing and inspections) is
weighed against the possible exposure and rework due to errors, cost
of warranty service, etc.  The quality level is established at the
point of maximum total profit -- to the extent that this can be
determined.
In other words, it is a risk-management operation from start to
finish.  And it is quantitative as hell.  We can accept this as an
aspiration -- as a goal -- but should not fall into the trap of
willy-nilly applying the methods that have worked for manufacturing
to software (many bitter lessons already learned there).  What do we
need?

a. We need more research.

b. We need more practitioners willing to try it out and to provide
working laboratories for promising methods.

c. We need less interference from lawyers and less imposition of
distortive legal issues into the engineering process.

d. We must apply all the proven, quantitative, objective measures we
have (e.g., coverage) until we have amassed enough data to be able
to say what can and cannot be safely lightened up -- what are the
necessary tolerances to which we should work.  We need to get far
more quantitative, not less.
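
The profit-maximizing quality level of point 9 can be illustrated
with a toy cost model (both cost curves are invented): testing cost
grows linearly with effort while expected field-failure cost decays,
and "good enough" is the effort that minimizes the total.

```python
def total_cost(test_effort):
    # Invented economics: each unit of test effort costs 10, while the
    # expected field-failure cost decays geometrically with effort.
    testing_cost = 10.0 * test_effort
    expected_failure_cost = 500.0 * (0.7 ** test_effort)
    return testing_cost + expected_failure_cost

# "Good enough" is the effort level that minimizes total cost:
best = min(range(40), key=total_cost)
print(best, round(total_cost(best), 2))
```

Past the optimum, each added unit of testing costs more than the
failure risk it retires -- which is the risk-management argument in
miniature.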

10. But most of the above is somewhat pointless, because in my
experience, based on my subjective (and/or quantitative, where
possible) evaluation of what constitutes proper risk (i.e., what is
"good enough"), I have only seen a handful of organizations that
produced better quality than sanity dictated. Only two or three
times have I said "you're testing too much."  The question is
pointless because most of the debaters have yet to reach the point
where they should be seriously thinking in terms of optimizing their
process by reducing the testing effort. Remember that an optimized
process is a CMM level 5 process.


                      Controlling Software Costs
                       Mr. William Roetzheim

                            Introduction

For most organizations, software development is a necessary evil.
Just about every manager has at least one horror story involving a
software development project gone awry.  Unfortunately, discussions
about software quickly degenerate into technical jargon (and in many
cases, technical nonsense) that is difficult to understand and even
more difficult to use as the basis for meaningful executive
decisions and strategy.

This white paper attempts to address these issues in a non-technical
and meaningful manner.  We focus on:

o The true organizational costs of software development;

o The critical success factors that drive these costs;

o The tools and techniques that can help to manage and control those
costs.
                  Organizational Costs of Software

If you look at your corporate income statement, you'll probably find
that somewhere between 2% and 7% of your revenue is spent on
information technology.  You may be aware that most of that is
absorbed by the information technology infrastructure (computers,
networks, network administration people, and so on). The money
you're spending on custom software development is probably under one
percent of your total revenue.  At this point in the analysis, most
executives move on to other areas where changes are likely to result
in a more significant improvement to the bottom line.

This is a flaw in the accounting standards, and it often results in
bad management decisions.

The problem is that the income statement only looks at the direct
costs of software development, ignoring the far more significant
opportunity costs and indirect costs.  The fact is, for many
businesses the execution of the strategic vision is dependent on
software. FedEx could not deliver packages overnight, Southwest
Airlines could not turn planes around in 20 minutes, and Pfizer
could not get their latest drug to market without software.
Software enables and defines the organization's business processes,
and so in a very real sense, defines the organization itself.
Delays or failures in software projects often have opportunity costs
in the form of lost market share, delayed new revenue streams, and
prolonged organizational inefficiencies that are many orders of
magnitude larger than the cost of the software itself.

Similarly, software failures have an indirect cost far beyond the
cost to repair the software.  For example, Scientific American
(November 1998) reported a case in which a crew member of the USS
Yorktown guided missile cruiser mistakenly entered a zero, causing a
divide by zero error in the computer software.  The cascading
failures of the interlinked computers on the ship eventually shut
down the propulsion system and left the ship dead in the water for
several hours. The actual cost to fix this error might be a few
thousand dollars, but the potential indirect costs could have been
enormous.  In a similar manner, failures in software projects within
your business can have indirect costs that result in lost revenue,
lost profits, lost market share, and lawsuits.

With this understanding of the big picture, it should be clear that
successful software development within your organization is
dependent on:

1. Selecting software projects that will enable the organization's
business strategy;

2. Ensuring that those projects are delivered in a timely manner and
to an appropriate standard of quality; and

3. Minimizing the cost of achieving the above objectives.

In the next section, we address the critical success factors needed
to achieve these results.

                 Software Critical Success Factors

Let's look at the software critical success factors as they apply to
each of the three dimensions of software cost (strategic, quality,
and implementation), drawing a parallel with the classic management
parable about the workforce cutting a road through a forest.  The
foreman cracking the whip and screaming at the workers to "cut
faster" is focusing on implementation costs. The job site supervisor
walking around to ensure that the work is done to proper standards,
so that it does not need to be redone later or create problems
during ultimate use, is focusing on quality costs.  The manager who
climbs a tree, looks around, and shouts down "wrong forest" is
focusing on strategic costs.

      Strategic Costs and Associated Critical Success Factors

The most critical, and in many ways the most difficult, challenge
to overcome is to ensure that your software development dollars are
focused on those projects that will offer the maximum strategic
benefit to the organization.  The associated critical success
factors are as follows:

o Ensure that the organization's strategic direction is clearly and
correctly defined.

o Ensure that software projects are defined and evaluated relative
to their impact on the above strategy.

o Ensure that all members of the software team understand the
strategic objectives that the software project must fulfill.

       Quality Costs and Associated Critical Success Factors

Statistically, poor quality is the leading cause of software project
failure (poor estimating and planning is number two).  However, even
if the poor quality does not cause a complete failure, the
downstream costs of poor quality can be staggering.  It is not
unusual for an
organization to spend ten times the original implementation costs on
software maintenance.  It is not unusual for an organization to
spend one-hundred times implementation costs on resources dependent
on the software during deployment.  The quality of the software has
a huge impact on these downstream costs.

The problem is exacerbated by the intangible nature of software.
During development, a project that is badly behind schedule and over
budget can quite easily appear to be on track.  All that is
required is for the team, intentionally or unintentionally, to
sacrifice quality along the way.  A requirement specification that
should take 6 weeks can be completed in 4 by leaving some language
vague.  A design specification that should take 4 months can be
completed in 3 by leaving out some details.  Software testing that
should take 5 months can be completed in 4 by not completely testing
everything.  In most cases, no one is the wiser until deployment,
when the end users are left to clean up the mess.

The key quality related critical success factors are:

o Detailed requirement documentation and tracking;

o Thorough test planning;

o Defect tracking and reporting;

o Implementation of software processes; and

o Training and, if needed, consulting in the above areas to ensure
consistent and proper usage.

     Implementation Costs and Associated Critical Success Factors

Of course, implementation is where "the rubber meets the road".
Mistakes during implementation can easily cause problems including:

o Delivery of a product that fails to meet the strategic objectives
for that product;

o Delivery of a product that has poor quality and is difficult or
impossible to deploy and maintain;

o Failure to deliver any useful product at all; or

o Delivery of a product at a cost that is significantly higher than
necessary.
Because high quality is a prerequisite to a successful
implementation project, the critical success factors described above
apply to the implementation costs as well.  The additional key
critical success factors applicable to the implementation phase
include:
o Accurate estimating and planning;

o Configuration management, which is the process of managing and
controlling different versions of the application as it is created;

o Project management;

o Software and database design;

o Content management, which is the control of graphics, images,
text, and so on that will be used by the software application;

o Software data warehousing and executive reporting to support
status monitoring, alerting, trend analysis, industry comparisons,
and so on;

o Implementation of software development processes; and,

o Training and, if needed, consulting in the above areas to ensure
consistent and proper usage.

Of course, all of this is easier said than done.  In the following
section we discuss some tools and techniques that will help make the
process successful.

                    Trained and Qualified People

The difference in development productivity between well qualified
and trained developers versus poorly qualified and trained
developers has been measured at a factor of 10 to 1 (some studies
put this as high as 25 to 1 in certain environments).  Training is
needed in:

1. Basic skills covering projects in general, including project
management, estimating, risk management, people skills, time
management, and consultative skills;

2. Skills specific to Information Technology projects, including
requirement definition, software design techniques, database design,
user interface design, quality assurance, configuration management,
and testing; and

3. Skills specific to the technologies being deployed, including the
specific development language, middleware tools, report writing
tools, and the selected database management system.

There are many certification programs available both from
manufacturers such as Microsoft and Rational and from third parties
such as Brainbench.  These certification programs help guide a
training curriculum, provide quantifiable measures of success, and
serve as rewarding milestones for the participants.

                   Consistent, Optimum Processes

Just as an individual may be trained, and through training do a job
in a consistent, successful manner, so an organization itself may be
trained, and through training do a job in a consistent, successful
manner. In the case of an organization this is often called process
management.  It works as follows:

1. The organizational skills needed to be successful are itemized.
This can be done using one of the well known software process models
(e.g., the Capability Maturity Model) or it may be done informally.
The organizational skills needed for success will roughly follow the
individual skills as itemized above.

2. An assessment, or inventory, is taken of the organization's
current processes (skills) in each of the itemized areas.  During
this assessment, you will look at factors such as how successful the
current processes are; how consistently the processes are followed;
whether the processes hold up when something goes wrong; and whether
the processes stay intact through changes in personnel.

3. For those organization processes that are deficient, a
prioritized list is created and the processes are improved to meet
the expectations of the organization.  Metrics may be put in place
to measure process success over time and, as with a quality regime,
to identify statistical deviations from the norm and either correct
the problem (worse than expected results) or adjust the process to
take advantage of a new approach (better than expected results).
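
The statistical-deviation step in (3) can be as simple as
Shewhart-style control limits computed over a historical metric.
The defect-rate figures below are invented; any consistently
collected process metric would do.

```python
import statistics

def control_limits(history, sigmas=3.0):
    # Classic control-chart limits: mean +/- sigmas * sample std deviation.
    mean = statistics.fmean(history)
    spread = sigmas * statistics.stdev(history)
    return mean - spread, mean + spread

defect_rates = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.1, 3.7]  # defects/KLOC
low, high = control_limits(defect_rates)

this_release = 6.5
if not (low <= this_release <= high):
    print("outside control limits: investigate the process")
```

A result below the lower limit is just as interesting: it may signal
a better-than-expected practice worth folding into the process.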

All of the above can be accomplished with nothing more than the most
basic tools; however, the right tools facilitate training, help to
improve processes, and make the entire exercise more effective.

                          Author Biography

      Mr. William Roetzheim is one of the world's leading
      experts on software project management and estimating,
      with over 26 years of relevant experience.  He has
      worked in a software metric/process management position
      for the US Navy, Honeywell, the MITRE Corporation, Booz
      Allen & Hamilton, and Marotz, Inc. He was the original
      author of the Cost Xpert product and holds two patents
      (one pending).

      Mr. Roetzheim has 15 published computer software books,
      including Software Project Costing & Schedule Estimating
      (Prentice Hall), The AMA Handbook of Project Management
      (American Management Association), Developing Software
      to Government Standards (Prentice-Hall), and Structured
      Computer Project Management (Prentice-Hall).  Mr.
      Roetzheim has over 90 published articles, has authored
      three computer columns, and has received 13 national and
      international awards.  He has an MBA and has completed
      the course work required for an MS in Computer Science.

      Mr. Roetzheim was the founder of the Cost Xpert Group.


                    eValid Updates and Specials

               Purchase Online, Get Free Maintenance

That's right, we provide you a full 12-month eValid Maintenance
Subscription if you order eValid products direct from the online
store.

                 New Download and One-Click Install

Even if you already got your free evaluation key for Ver. 3.2, we
have reprogrammed the eValid key robot so you can still qualify for
a free evaluation for Ver. 4.0.  Please give us basic details about
yourself at:

If the key robot doesn't give you the keys you need, please write to
us and we will get an eValid evaluation key
sent to you ASAP!

                     New eValid Bundle Pricing

The most-commonly ordered eValid feature key collections are now
available as discounted eValid bundles.  See the new bundle pricing
at:

Or, if you like, you can compose your own feature "bundle" by
checking the pricing at:


Check out the complete product feature descriptions at:


Tell us the combination of features you want and we'll work out an
attractive discounted quote for you!  Send us email and be assured
of a prompt reply.


         Testing and Certification of Trustworthy Systems

 Part of the Software Technology Track at the Thirty-Seventh Annual
   Hawaii International Conference on System Sciences (HICSS-37)
          on the Big Island of Hawaii, January 5-8, 2004

Full CFP details and additional information are available on the
conference web site.


The specification, development, and certification of trustworthy
computing systems pose great research challenges.  Modern society is
increasingly dependent on large-scale systems for operating its
critical infrastructures, such as transportation, communication,
finance, healthcare, energy distribution, and aerospace.  As a
result, the consequences of failures are becoming increasingly
severe.  These systems are characterized by heterogeneous
distributed computing, high-speed networks, and extensive
combinatorial complexity of asynchronous behavior.  Effective
methods for testing and certification of trustworthy systems are in
great demand.  This minitrack provides a venue for research results
and will contribute to their practical application in the software
systems of the future.

The minitrack focuses on advanced techniques for testing and
certification of trustworthy systems.  The following topics
represent potential research areas of interest:

* New techniques for testing and certification of software systems
* Testing and certification metrics
* Trustworthiness attributes like reliability, security, and survivability
* End-to-end integration testing methods and tools
* Test case generation
* Existence and correctness of testing oracles
* Object-oriented testing methods and tools
* Integrating quality attributes into testing and certification
* Engineering practices for testing and certification
* Automated tools for testing and certification support
* Testing in system maintenance and evolution
* Specification methods to support testing in system certification
* Roles and techniques for correctness verification in system certification
* Industrial case studies in testing and certification
* Technology transfer of testing and certification techniques


* Richard C. Linger, Software Engineering Institute, Carnegie Mellon
University, 500 5th Avenue, Pittsburgh, PA 15213. Phone: (301) 926-4858

* Alan R. Hevner, Information Systems & Decision Sciences, College of
Business Administration, University of South Florida, 4202 East Fowler Ave.,
CIS1040, Tampa, FL 33620. Phone: (813) 974-6753 E-mail:

* Gwendolyn H. Walton, Dept. of Mathematics & Computer Science, Florida
Southern College, 111 Lake Hollingsworth Dr, PS Bldg Room 214, Lakeland, FL
33801. Phone: (863) 680-6283 E-mail:


   QSIC 2003: Third International Conference on Quality Software
         Friendship Hotel, Beijing, September 25-26, 2003

Software is playing an increasingly important role in our day-to-day
life.  However, software today -- unlike automobiles, bridges, or
office towers -- is produced without the benefit of established
standards. It is well known that there are still unresolved errors
in many of the software systems that we are using every day. The aim
of this conference is to provide a forum to bring together
researchers and practitioners working on improving the quality of
software, to present new results and exchange ideas in this
challenging area.

We solicit research papers and experience reports on various aspects
of quality software. See a list of topics of interest and
submission guidelines below. Submissions must not have been
published or be concurrently considered for publication elsewhere.
All submissions will be judged on the basis of originality,
contribution, technical and presentation quality, and relevance to
the conference. The proceedings will be published by IEEE Computer
Society Press. Selected papers will appear as a special issue of
Information and Software Technology, an international journal
published by Elsevier.

Topics include, but are not limited to:

   * Automated software testing
   * Configuration management and version control
   * Conformance testing
   * Cost estimation
   * Debugging
   * Economics of software quality and testing
   * Formal methods
   * Metrics and measurement
   * Model checking
   * Performance and robustness testing
   * Process assessment and certification
   * Quality evaluation of software products and components
   * Quality management and assurance
   * Quality measurement and benchmarking
   * Reliability
   * Review, inspection, and walkthrough
   * Risk management
   * Safety and security
   * Software quality education
   * Specification-based testing
   * Static and dynamic analysis
   * Testability
   * Testing of object-oriented systems
   * Testing of concurrent and real-time systems
   * Testing strategies, tools, processes, and standards
   * Tool support for improving software quality
   * Validation and verification
   * Application areas such as e-commerce, component-based
     systems, digital libraries, distributed systems,
     embedded systems, enterprise applications, information
     systems, Internet, mobile applications, multimedia,
     and Web-based systems


   Institute of Software, Chinese Academy of Sciences, China
   Software Engineering Group, University of Hong Kong, Hong Kong
   Centre for Software Engineering, Swinburne University of Technology,
   Australia


    T.H. Tse, The University of Hong Kong, Hong Kong

Direct all enquiries to:


     UNU/IIST: International Institute for Software Technology

UNU/IIST, the International Institute for Software Technology, is a
Research and Training Centre of the United Nations University.  It
serves developing countries, helping them attain self-reliance in
software technology by training their young scientists and engineers.

UNU/IIST has a group of highly diverse, multi-national staff from
various backgrounds and cultures. It offers a pleasant and
stimulating environment at its new premises in Macao, and the
opportunity to train and work with people from many parts of the
world, both in Macao and on trips to developing countries.
Scientists currently working at UNU/IIST include He Jifeng and Chris
George. Dines Bjorner (the founding director) and Zhou Chaochen have
also worked here. UNU/IIST also attracts many long- and short-term
academic visitors, so there are good opportunities for research with
the staff and with others.  For more information about UNU/IIST,
please visit the UNU/IIST home page:

Macao is a Special Administrative Region of China, about 40 km from
Hong Kong, on the other side of the Pearl River estuary.  It is a
small, safe city, predominantly Chinese in culture but with strong
Portuguese influences and an active expatriate community of people
from many countries.  There are schools that teach in English,
Chinese, and Portuguese.

UNU/IIST currently has a vacancy for a Research Fellow, for which
applications are invited by 15 May 2003.  Contact:

  Selection Committee
  c/o Chris George
  Acting Director, UNU/IIST
  P.O. Box 3058, Macao
  Fax: +853 712 940

    ------------>>> QTN ARTICLE SUBMITTAL POLICY <<<------------

QTN is E-mailed around the middle of each month to over 10,000
subscribers worldwide.  To have your event listed in an upcoming
issue, E-mail a complete description and full details of your Call
for Papers or Call for Participation to us.

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should
  provide at least a 1-month lead time from the QTN issue date.  For
  example, submission deadlines for "Calls for Papers" in the March
  issue of QTN On-Line should be for April and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc., and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items appearing in QTN represent the
opinions of their authors or submitters; QTN disclaims any
responsibility for their content.

STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR logo are
trademarks or registered trademarks of Software Research, Inc. All
other systems are either trademarks or registered trademarks of
their respective companies.

        -------->>> QTN SUBSCRIPTION INFORMATION <<<--------

To SUBSCRIBE to QTN, to UNSUBSCRIBE a current subscription, to
CHANGE an address (an UNSUBSCRIBE and a SUBSCRIBE combined) please
use the convenient Subscribe/Unsubscribe facility at:


As a backup you may send Email directly to us as follows:

   TO SUBSCRIBE: Include this phrase in the body of your message:

   TO UNSUBSCRIBE: Include this phrase in the body of your message:
           unsubscribe <Email-address>

Please, when using either method to subscribe or unsubscribe, type
the <Email-address> exactly and completely.  Requests to unsubscribe
that do not match an email address on the subscriber list are
ignored.
               Software Research, Inc.
               1663 Mission Street, Suite 400
               San Francisco, CA  94103  USA

               Phone:     +1 (415) 861-2800
               Toll Free: +1 (800) 942-SOFT (USA Only)
               FAX:       +1 (415) 861-9801
               Web:       <>