sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +=======    Quality Techniques Newsletter    =======+
         +=======              July 2004              =======+

QTN is E-mailed monthly to subscribers worldwide to support the
Software Research, Inc. (SR),
eValid, and TestWorks user communities and to other interested
parties to provide information of general use to the worldwide
internet and software quality and testing community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged, provided that the entire QTN
document/file is kept intact and this complete copyright notice
appears in all copies.  Information on how to subscribe or
unsubscribe is at the end of this issue.  (c) Copyright 2004 by
Software Research, Inc.


                       Contents of This Issue

   o  eValid: Recent News and Updates

   o  Retrospective on Software, by Boris Beizer (Part 2/2)

   o  Developing Trustworthy Software for Safety Critical Systems

   o  E-Commerce Technologies (Special Track at SAC 2005)

   o  International Journal of Web Services Research

   o  eValid: A Quick Summary

   o  Infinity 2004: Workshop on Verification of Infinite State
      Systems

   o  Semantic Web Challenge 2004

   o  QTN Article Submittal, Subscription Information


                  eValid: Latest News and Updates

eValid is the premier WebSite Quality Testing & Analysis Suite.
eValid solutions help organizations maintain e-Business presence,
improve website quality and performance, reduce down time, prevent
customer loss, and control costs.

eValid's Web Analysis and Testing Suite is comprehensive, yet
scalable and easy to use, and applies to a wide range of web
applications.  Because it is built entirely inside an IE-compatible
browser, realistic viewer-experience results are 100% guaranteed.

                 Political Party WebSites Analyzed
In an unusual academic application, the eValid engine is being used
as part of the Kubernetes Project being run by Ph.D. candidate
Andrea Ricci.  See his description at:

The application is to analyze the websites of 1200 political parties
worldwide. The research project is coordinated by two co-tutors
serving at three Universities: Prof. Jan Servaes, Katholieke
Universiteit van Brussel, Belgium and the University of Queensland,
Australia; and Prof. Francois Heinderyckx of Universite Libre de
Bruxelles, Belgium.

Here are the relevant pages for Prof Servaes:

Similarly, for Prof. Heinderyckx:

               New eV.Manager Command Line Interface
We have added a new command line interface to eV.Manager. The new
interface gives you more flexibility if you want to use eV.Manager
to run a comprehensive website test suite in unattended mode, e.g.
for overnight runs.

eValid users may click:

        Help > Documentation > User Manual

and then navigate to the eV.Manager command line options by

        Technical > Details > Command Line

You'll find the link to the eV.Manager Batch Mode material in the
index at the top of the page.

            eValid Chosen for Use In CMU/MSE Coursework
eValid has been chosen for use in student projects as part of the
Master of Software Engineering program at Carnegie-Mellon University
(CMU). Students in the program apply full eValid licenses to support
their web application development projects and to analyze completed
work.

The technical contact for the eValid use is Prof. Mel Rosso-Llopart,
Associate Director of the Master of Software Engineering program
and a Senior Lecturer in the CMU distance education program.

Read about the CMU/MSE program at:

              Updated BlueRiverStone Report on FT-150
There is a new update on the BlueRiverStone report that analyzes 150
websites, with some very interesting new results -- and lots of
changes.  Read the Financial Mail Report for full details and
complete website ranking data -- including changes since the first
set of runs. See:

                 Product Download Location, Details
Here is the URL for downloading eValid if you want to start [or
re-start] your evaluation:

                   Contact Us With Your Questions
We welcome your questions about eValid and its applications.  We
promise a response to every question in one business day if you use
the WebSite request form:


                     Retrospective on Software
                         From the Year 2011
                           (Part 2 of 2)

      Prediction is extremely difficult. Especially about the
      future.  (Niels Bohr)

      Note:  This article is taken from a collection of Dr.
      Boris Beizer's essays "Software Quality Reflections" and
      is reprinted with permission of the author.  We plan to
      include additional items from this collection in future

      Copies of "Software Quality Reflections," "Software
      Testing Techniques (2nd Edition)," and "Software System
      Testing and Quality Assurance," can be purchased
      directly from the author by contacting him at


2.5.  Publishers

2.5.1.  What They Do

The publishers write software but they don't innovate it.  There are
still hardware and operating system differences, although they're
transparent to us.  Each package is tailored to the platform.  Also,
there are the language and culture localization changes.  That's the
publisher's job: to take prototypes from authors and convert them
into maintainable products that work on every platform and in any
marketable tongue. Publishers have armies of programmers to do this.
Publishers still maintain and create infrastructure software such as
operating systems, compilers, and drivers but lately they've been
spinning off their in-house infrastructure programming to
freelancers as rapidly as they can.

The publisher's strength is a big library of well-tested, reliable,
objects.  Publishers tout the virtues of their object libraries to
authors and distributors.  Objects are created by in-house
programmers but also developed by independent authors or obtained
under license from other publishers.  Associated with each object is
another object, a test jig, that is typically created independently
and whose cost ranges from equal to the object's cost to five times
it.  The quality and effectiveness of the test jigs are also a
publisher's strength.

Publishers do most of the copious unit, integration, and system
testing, but little compatibility and configuration testing.  In
that respect, their main concern is to assure that their completed
software will pass the distributor's sometimes brutal independent
testing.  Taken together, testing and QA activities consume roughly
75% of the publishers' labor.  That's despite the use of highly
automated test tools.  The industry standard for all testing from
unit to system is from two to three test cases per compiled token
(e.g., twelve to fifteen test cases per line of code).

2.5.2.  Who They Are

Software publishers came from the old software industry, but also
from the "creative" publishing industry such as music, books and
entertainment. There's no longer a clear distinction between various
kinds of publishers.  Book publishers dabble in software, software
publishers produce CyberRock musicals, and music publishers are
pushing multimedia books.

There are thousands of publishers worldwide, but that number is
rapidly shrinking as consolidation continues.  This is especially so
in software-intensive publication where high labor costs in Europe,
the US, Canada, and Japan are driving the publishers to use
experienced programmers in China, Hungary, India, Indonesia,
Ireland, Mexico, and Russia.  This is possible because most
programming work is routine and relies increasingly on knowledge of
proprietary object libraries rather than general programming skill.
Jobs have been lost in this area, but as heavily in Japan and Europe
as in the US.

2.5.3.  How They Sell Their Work

Publishers license their work as follows:  91% licensing to
distributors;  8% licensing to service companies, and 1% licensing
and direct sales to individual users.  The latter is rare because of
the high cost.  A word processor package, say, which would have an
equivalent sale price of $30 when used through a service company,
sells for about $25,000 and up.  Consequently, the direct
user license is bought only by big corporate users.

2.5.4.  What They Deliver

Publishers deliver fully tested, integrated, working software and
behavioral test suites tailored to a variety of platforms.  They
create the variants needed for localization, for networks, hardware
variations, distributor-specific requirements, etc.  They also give
the distributors complete access to certified unit/component,
integration, and behavioral test suites that the distributor can
call on when needed. Platform-specific integration testing is done
by the publisher, but not application compatibility testing or
configuration testing.

Publishers make the implementation decisions including whether to go
hardware or software.  A high frequency routine with broad
distribution may be compiled as hardware instead of software. For
example, I noticed in my recent download that SUPERSQUEEZE had
finally been hardwired: it had been hardwired for over two years in
home computers where the smaller RAM and disc space made it
essential.  Similarly, parts of the translator and printer driver
have been recently hardwired.

Implementation decisions (hardwired, software, object) are based on
user profile statistics gathered from the users via the service
companies and distributors. Implementation will typically differ
from distributor to distributor because of different user groups
serviced by the distributors.

2.5.5.  Support and Service

Publishers deal almost exclusively with distributors. They provide
third-tier support to users.  That is, user problems that can't be
solved by the service companies are passed to the distributors who
in turn may pass them on to the publisher if necessary.  There are
exceptions, of course, such as big corporate users who may have
direct support contracts with publishers.

Publishers maintain their own products.  Bugs reported by one
distributor will be corrected and updates sent to all other
distributors that have bought that package.  The interface between
publishers and distributors is direct, on-line, and mostly
automated.  The need to provide this highly-technical service and
interface has driven many small publishers out of business.

2.5.6.  Tools and Technology Usage

Another factor that contributed to the erosion of traditional
programmer employment in the developed nations is the success of the
object-oriented programming paradigm.  At first, in the early 90's,
the creation of reliable object libraries was hindered by attitudes
and technology.  The attitude problems were: the widespread myth
that object-oriented programming was cheaper and easier; an
unwillingness to face the high cost of object creation and testing
compared to ordinary software.  Once it was accepted that library
objects were more expensive to create and test, and that each object
was a capital investment, real library creation became possible.
The technical barrier in the early 90's had been that we had only
hazy ideas of how to test objects.  Breakthroughs by 1995 gave us
guidance on how such objects should be tested.  Also, the publishers
recognized and accepted the idea that test and verification of
objects by means of a complementary test jig for every object was
the key to object test and verification -- despite a test jig cost
several times higher than the object it tested.

At first, programmer employment at the publishers rose as the
object libraries were being created.  Critical mass was reached in
1995 -- the point at which the libraries were big enough to meet the
promise of OOP.  At that point, with fewer new objects to create,
programming employment would have dropped precipitously except that
the slack was being taken up by Y2K remediation. The jobs that did
remain (other than Y2K service) depended less on technical
programming know how and more on object library knowledge and were
therefore easy to take offshore or to assign to entry-level people.
Experienced programmers in the developed countries had worked
themselves out of jobs in their traditional form.

There is almost no new technology now used by publishers that wasn't
around twenty years ago.  The difference is that the technology is
actually used, and that global standards (good or bad) are accepted
and followed by everyone.  The difference, compared to a decade ago,
is quantitative rather than qualitative.  The survivors have been
quick to accept and exploit the existing technologies.  That holds
both for publishers and individual programmers.  Especially
significant has been the exploitation of test and verification
execution and design automation.
2.6.  Distributors

2.6.1.  What They Do

The distributors don't write application or infrastructure software.
They lease packages from publishers.  They configure, install, and
do compatibility testing on other people's software according to the
standards of the service companies. Distributors work for more than
one service company. There are hundreds of distributors that average
several hundred employees -- big distributors such as NP
have several thousand employees.  Some service companies have their
own distributor subsidiary, but that doesn't prevent them from using
independent distributors for specialized (e.g., corporate) accounts.
Some distributors are user industry-specific and provide vertical
support (e.g., retailers, florists, CAD/CAM, dentists, etc.) while
others are broader based (horizontal).  Service companies select the
distributors that best match their users' profiles.

2.6.2.  Who They Are

Many software people (but note, not programmers) work for
distributors.  Entry level people are concerned mainly with setup,
maintenance, and testing, while mid-level workers debug nasty
compatibility problems that do crop up.  About half the distributor
work force is concerned with writing and monitoring independent
testing of the software they buy. Distributors do write software,
but it's highly technical and concerned with automated installation
and remote configuration testing and debugging.  I'm sure that
you've noticed that downloads seem to go in three phases.  First the
download, which takes twenty minutes say; then your computer is out
of action for a half hour as the download is tested; finally there's
a five minute call back to the distributor to report settings,
problems, etc.  There's probably another few minutes of processing
and analysis back at the distributor's and if there's a problem,
there will be more processing.  Sometimes, a human analyst may be
called in to check a problem out -- usually long before you spot a
symptom.  Even with automation, there's work for humans.  But as the
software continues to get more complicated, more of the human
configuration debugging job is automated and it's the distributors
that write most test/debug automation software.

The top jobs at a distributor are the software buyers.  Everybody
wants to be a software buyer.  They have the clout in the industry,
although few outsiders know them by name.  The buyers set standards,
define human interfaces, specify touch-and-feel, control when
features are introduced and how, and define new applications.
We call them buyers but they're more like Hollywood movie producers.
Some buyers drive the same fancy cars and sport the same flashy
diamond pinkie rings --  ugh!  And for home computing, the boundary
with entertainment is so blurred that it's no wonder that home
service buyers sound like Hollywood press agents.

The distributor labor force consists of two main groups: a big group
of para-professionals and a smaller group of programmers.  Para-
professionals typically have a two-year degree in computer
technology and some programming knowledge.  The shrinking
professional cadre has the same kind of Computer Science or Software
Engineering training that they had ten years ago.  The biggest
change that has occurred among the professionals is that they are
content and even challenged by the problems of working with,
integrating, and testing software that they did not themselves
write.  They are content with the idea that they may never originate
a piece of software. One reason many older programmers lost their
jobs is that they couldn't accept these 21st century realities and
stuck to the outmoded, mythical, view of programming as "everyone an
author."

The para-professional software technician is the fastest growing job
segment of the software industry. As installation and maintenance
problems become more routine and automated, there is a decreasing
need for professional programmer intervention.  Here too, the
programmers are working themselves out of jobs as they automate and
make it possible to provide support by less qualified para-
professional technicians.

Distributors have not gone offshore because they must work with the
users' native languages.  Japanese distributors service the Japanese
market while American distributors service the North American
market.  Specialized, vertical distributors, for biological
researchers, say,  are multinational and may be distributed over a
wide geographical region (e.g., the whole world), but that's because
vertical distributorship requires so much specific application
knowledge.

2.6.3.  How They Sell Their Services

Distributors sell their services to the service companies on a usage
royalty basis.  The deals are made between the distributors and the
service companies as a comprehensive contract covering all software,
testing, installation, maintenance, and user support for a specified
period at a specified rate for each kind of service provided.  Fees
may range from a few hundredths of a penny for looking at a file or
using an object to dollars/per minute for technician time.  Almost
nothing is sold outright.  This may seem complicated and may seem to
require too much fine-grained bookkeeping but it is based on
procedures developed for music-industry royalties years before the
first commercial computer.

2.6.4.  What They Deliver

The distributor delivers everything but the hardware:  software,
user-specific tuning, user-specific installation and testing,
compatibility testing, and technical support.  The download I got at
Narita was created, configured, tested, and installed by a
distributor.  The distributor selects the packages, preinstalls them
in a simulator, runs the general compatibility tests, runs specific
tests (e.g., files) based on prior crash reports, and then
downloads the whole lot to me with any tests that must be run on my
system.   The distributors are the real service providers.

The main differences between the software delivered by the publisher
to the distributor and by the distributor to the service company
(and therefore to me) are the result of: configuration, general
compatibility testing, tuning, and user-specific testing -- almost
all of which are automated.  The distributor maintains my user
profile, configuration specifics and preferences for every package I
use, all crash reports and diagnostics which my system has uploaded,
and all other information relevant to me. If I switched to a
different service company, I'd probably still get my computational
services from the same distributor.

2.6.5.  Support and Service

Distributors provide the second tier support for problems that can't
be handled by the service companies.  I'm not aware of most of the
problems they handle for me because any crash, data loss, or data
corruption will result in information relayed back to them for
action, often before I'm aware of symptoms. If possible, the problem
is fixed and the correction scheduled for the next download.  If the
problem is generic to the application there will be many comparable
problem reports from other users and the correction in my system
will await an update from the publisher.

The second tier support concerns technical or application-specific
inquiries that can't be handled by the service company's technicians,
that require human intervention, or that don't fit into a preexisting
service category.  Getting such service is
frustrating because there are several levels to plow through before
I get a human.  However, human service is so rarely needed that I'm
not often frustrated.

Let's say that I think that I have a problem with an application.
My first step will be to send a message concerning the problem to
the service company.  Before that message goes out, however, my on-
line HELP files are scanned and if there's an entry that answers my
query, the information will be displayed or they may activate the
tutorial package: a gentle, but automated, way to remind the user to
"READ THE MANUAL, DUMMY!"  If that doesn't do it and if I insist
that the query gets sent to service company, the next step may be
that they will download an extended HELP file for that application.
If I still need help, I'll have to go through a service company
representative to get through to the distributor.

The distributor has comprehensive application-specific and user-
specific information and more sophisticated expert systems to handle
my inquiry.  An even more copious HELP file may be on the way.  If I
get past that point, I may be connected to a technician who is a
specialist on that application and who can get more information
about me than I want anybody to know in a few keystrokes.  After
that, as it was ten years ago, you're passed along an ever more
knowledgeable hierarchy of service people.  The nicest thing about
the service, though, is that they have my user-specific information
on line so I don't waste time answering the same questions
repeatedly and I don't get patronizing service from someone who
underestimates my knowledge.  The bad part about the service is the
higher up the technical ladder it goes the more it costs -- and
costs, and costs, and costs. Service costs beyond an almost useless
minimum are over-and-above the basic monthly charges.

2.6.6.  Tools and Technology Usage

Huge data bases of user-specific information backed up by expert
systems automate most service requests.  The key technological
components are the test suites. There are four kinds of test suites:
application specific, generic compatibility, user-specific
compatibility, and user-specific.

+   Application Test Suites.  These are behavioral (black-box) test
suites that may have been created by publishers, by distributors, or
by both, but which now are maintained by every distributor.  They
are proprietary products in their own right.  The importance of
these test suites to the distributor is that without them, it would
be impossible to switch to a different publisher for any given
application. Proprietary application test suites give the
distributor clout and prevent any one publisher from getting a
stranglehold on a popular application. Because the distributors'
test suites include all the tests of all the publishers they use for
that application, these suites are more thorough than the tests of
any one publisher.

+   Generic Compatibility Test Suites.  The distributor has
statistics on how packages will be loaded and which combinations
have caused problems in the past.  While the obvious compatibility
problems of the past caused by violating operating system rules and
similar primitive stuff are almost gone now, compatibility problems
remain and will probably be with us forever.  Sources of
incompatibility are things such as protocol translation,
synchronization, and the fundamental fact that no program can be
complete.  The generic test suites are applied to combinations of
applications based on usage statistics gathered by the service
companies.  All the combinations needed to cover 99.95% of all
installations are routinely tested. Compatibility problems may delay
the introduction of a new product and therefore royalties.
Distributors have adopted a first-come priority: even if the problem
turns out to be in a package already installed, the introduction
delay will be suffered by the newcomer. The burden
is put on the newcomer to provide the workaround or fix, but
cooperation between publishers is strongly encouraged.
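
The usage-driven selection described above can be sketched as a
simple greedy cutoff.  This is an illustrative reconstruction, not
any distributor's actual algorithm; only the 99.95% figure comes from
the text, and all names and data are assumptions.

```python
# Hypothetical sketch: test the most-installed package combinations
# until the selected set covers 99.95% of all installations.
def combinations_to_test(install_counts, coverage_target=0.9995):
    total = sum(install_counts.values())
    selected, covered = [], 0
    # Greedy: most-installed combinations first.
    for combo, count in sorted(install_counts.items(),
                               key=lambda kv: kv[1], reverse=True):
        if covered / total >= coverage_target:
            break
        selected.append(combo)
        covered += count
    return selected

# Illustrative usage statistics (installation counts per combination):
install_counts = {
    ("editor", "mailer"): 6000,
    ("editor", "browser"): 3000,
    ("mailer", "browser"): 996,
    ("cad", "sequencer"): 4,     # rare combination, below the cutoff
}
print(len(combinations_to_test(install_counts)))  # -> 3
```

The rare fourth combination is never tested: the first three already
cover 99.96% of installations, which is the point of the cutoff.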

+   User-Specific Compatibility Test Suites.  A sharper set of
compatibility tests involving three, four, or more coexisting and
communicating applications are run on a simulator of my
configuration and perhaps, also on my actual system. These test
suites are automatically derived from application test suites and
generic compatibility test suites.  The distributors don't want to
deal with user problems because despite the big bucks they charge
for service, the profit margin for human service is low compared to
the profit in running software.  The distributors learned that
running user-specific compatibility tests is the key to profit.

+  User-Specific Tests.  Everything is run under capture/playback
these days.  All systems have built-in checkpointing and recovery.
And all systems eventually go on line with the service company.
Sometimes, as you've noticed, the screen freezes and the system
stops for a few seconds.  Typically that means that you've had a
crash or some other problem. The system falls back to the checkpoint
data, does a recovery, and uses the stored data in the
capture/playback buffer to run the scenario over.  If the same
scenario crashes again, a third attempt is made.  The third crash is
considered to be a hard bug.  Pertinent data is dumped in a bug
file, along with the captured data that triggered the problem.  The
data are later uploaded to the distributor via the service company
and may be incorporated into your library of user-specific tests in
addition to being passed up the line to the publisher if that is
warranted.  You're not aware that this is happening because the
fourth recovery attempt will be done in a different order, but still
based on the captured data, so that usually you're back on line.
But you know, of course, about those occasional "what the hell?"
situations.
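
The retry protocol above can be sketched in a few lines.  This is a
hypothetical illustration of the essay's scheme, not a real product's
code; all names are assumptions, and the checkpoint restore itself is
elided.

```python
# Sketch of the checkpoint/replay recovery protocol: replay the
# captured scenario up to three times; a third crash is logged as a
# hard bug, and a fourth attempt replays the events in a new order.
def run_with_recovery(scenario, run, replay_reordered, bug_file):
    last_crash = None
    for _ in range(3):
        try:
            return run(scenario)          # normal execution
        except Exception as crash:        # crash: fall back to the
            last_crash = crash            # checkpoint (restore elided)
    bug_file.append((scenario, repr(last_crash)))  # hard bug: 3 crashes
    return replay_reordered(scenario)     # fourth attempt, new order

# Simulate a scenario that crashes on every direct replay:
bugs, attempts = [], []
def always_crashes(scenario):
    attempts.append(scenario)
    raise RuntimeError("simulated crash")

result = run_with_recovery("open-file", always_crashes,
                           lambda s: "recovered", bugs)
print(result, len(attempts), len(bugs))  # -> recovered 3 1
```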

There's a lot of technology in these test suites and even more
technology used to maintain the suites as applications are
maintained.  Most test suite maintenance, from generic compatibility
to user-specific, is done automatically based on algorithms first
published in the late 80's and early 90's.  Without automation,
user-specific testing would be impossible as would the reliability
and service we get today.  Distributors who ignored test suite
maintenance technology are no longer with us.  Those who understood
that technology, who exploited it, who expanded it and made it
practical, now dominate the distributorships.

2.7.  Service Companies

2.7.1.  What They Do

The service companies are at the visible top of the industry: the
companies you and I deal with.  They provide not just computation
but also communications and entertainment.  I don't buy the four
services (computing, long distance and local communications, and
video) as a bundle because I'm old fashioned.  I want computing from
a computer service company, telephones from AT&T, and video from my
cable company.

But that won't last.  AT&T is pushing a discount to merge my
computing and long-distance.  The local telephone operating company
is set to exploit the fiber optics lines that they wired in the
neighborhood, so my cable company may go next.  I'll be down to two
vendors instead of the present four.

2.7.2.  Who They Are

Forty giants share the industry today but in another decade it will
be down to a handful of supergiant multinationals.  The giants have
been created by mergers of every conceivable permutation and
combination of: communications carriers, entertainment, hardware,
software, finance and credit card companies.  You name it.

2.7.3.  How They Sell Their Products

You pay by the month, you pay by the usage, but mostly you pay and
pay.  And despite the continued drop in cost for base services and
popular applications, you know and I know that you'll be paying more
each month next year than you did last year -- paying more and
loving it.
2.7.4.  What They Deliver

They deliver hardware and software.  The software is whatever can be
squeezed onto a fiber-optic cable by a fast modem.  Computation may
be what started it, but now it's entertainment, newspapers, stock
reports -- everything promised in the 70's.  The hardware is
whatever I need to run the software.  Right now it's mostly
computers, but if you subscribe to an all-inclusive service, it's
your computer, TV, stereo, and home alarm, and the pundits are still
claiming that in the future it'll be your whole house.

2.7.5.  Support and Service

I'll stick mainly to computational services and not deal with all
the other services you can get.  For computation, they provide the
first tier of service. That's usually hand-holding stuff for
novices, telling you to run the tutorials, and how to find the
"ANYKEY."  The service personnel are typically high-school graduates
who are well spoken and polite in your native language.  But I
suspect that some of them have been replaced by voice recognition
and expert systems of late.  The overwhelming majority of service
calls are handled at this level.  About 1% get passed through to the
appropriate distributor. Although the service companies are the main
computational suppliers, they have almost no programmers on the
payroll.

2.7.6.  Tools and Technology Usage

The technology is 21st century telecommunications / computational
technology.  Huge broad-band circuit-switching exchanges,
international networks, TV broadcast stations...

3.  So What Happened to The Industry?

3.1.  The Present Realities

What happened to the software industry and the 1,000,000 programmers
it had employed?  It evolved. The programmers either evolved or
joined the unemployment lines.

+   Many jobs in the new industry did not need a Computer Science or
Software Engineering degree, but could be handled by third-world
para-professionals and technicians.

+   Most people trained as programmers will never write a piece of
original code.  Most employment of professionals in the old industry
was and is in the new industry, in testing, integration,
configuration, and maintenance and not in programming.

+   The few programmers (at publishers) who write new code write new
code for applications whose creative aspects were done by someone
else -- the authors.

+   Very few programmers ever make it to the ranks of world-class
authorship.  And those who do are content with the idea that they
will not write the software that embodies their creation.

About half the people employed in the Western software industry in
1999 could adapt to the new realities and either found other outlets
for their creative urges or learned to recognize the subtler but
equally challenging creativity of the 21st century.

3.2  Winners and Losers

What are, in retrospect, the characterizing differences between the
winners and losers in the 21st Century software industry?

The Losers

+   No clear vision of the future.  A complacent belief in gradual
evolution.

+   Reacted to events and were swept along by changes.

+   Adopted commercial products only after cost-benefits had been
clearly established by the experience of others.

+   Ignored emerging technologies and dismissed them as "merely
theory."  "Yes, but can you prove to me that it's practical?"

+   Made few mistakes.

+   Software development and testing is mostly manual and labor
intensive.

+   Continued creating ego-intensive software based on individual
craftsmanship and the mostly mythical programmer/author image.

+   Left training to chance, to one-on-one on-the-job
apprenticeships, and to an overblown estimate of the importance of
prior experience.
+   Believed that writing new software is where it's at and that
testing and QA are second-class work.

The Winners

+   Had a written vision of the industry's future and their part in
it.

+   Anticipated change and planned for it: personally and
institutionally.

+   Were aggressive in adopting and adapting commercial technology
despite its flaws.  Pushed the vendors into new products.

+   Made strategic alliances with researchers, provided test beds
for new ideas, and actively contributed to making the technology
practical.

+   Made many expensive mistakes.

+   Whatever can be automated eventually will be.

+   Accepted the cooperative nature of software engineering and
found new channels and challenges for programmers' creative
energies.

+   Accepted the idea of continued education (personal and
institutional) as an essential ingredient of software engineering.

+   Focused on testing and quality assurance technologies as the
strategic investment for the 21st century.

Citation: The citations in this essay are not intended to be
comprehensive; they are just a sample of about 100 times as many
comparable stories and articles published in the 1983-1993 period.
I investigated additional citations for the 1994-1999 period, but
they added nothing new, only confirming the trends that were clearly
in place when the original version was written.


    Developing Trustworthy Software for Safety Critical Systems

The IEEE Reliability Society will hold a special workshop on
"Developing Trustworthy Software for Safety Critical Systems" at
Dulles Airport on Thursday and Friday, September 9-10.  We are most
fortunate to have four recognized experts to teach, mentor, and
help attendees apply best practices for mitigating software risk.

Please see details at:


                      E-commerce Technologies
Special Track at the 20th ACM Symposium on Applied Computing, SAC 2005
                         March 13-17, 2005
                     Santa Fe, New Mexico, USA

For the past nineteen years the ACM Symposium on Applied Computing
(SAC) has been a primary forum for applied computer scientists,
computer engineers and application developers to gather, interact,
and present their work. SAC is sponsored by the ACM Special Interest
Group on Applied Computing (SIGAPP); its proceedings are published
by ACM in both printed form and CD-ROM; they are also available on
the web through ACM's Digital Library. More information about SIGAPP
and past SACs can be found at

              Special Track on E-Commerce Technologies

A few years ago, e-commerce applications were focused primarily on
handling transactions and managing catalogs. Business requirements,
however, are evolving beyond transaction support to include content
management, personalization, integration, and marketplace
enablement.  The track will focus on technologies currently employed
in creating offerings, the latest developments in the electronic
marketplace, on computational and deployment issues, architectural
support, policies, and advanced solutions and practices. The track
is intended to address the current needs of both researchers and
practitioners, and to identify significant research challenges that
will most beneficially impact the future use of e-commerce
technologies.

The topics of interest include, but are not limited to:

      Electronic Auctions
      Agent Technology for E-Commerce
      User Preference Elicitation
      Recommender Systems
      Formal Methods in E-Commerce
      Security Aspects
      Mass Personalization Technologies
      User Modeling and Customer Profiling
      Electronic Contracting and Electronic Negotiation
      Mobile E-Commerce Applications
      Trust and Reputation Systems in E-Commerce
      Privacy and Anonymizing Applications
      Data Mining for E-Commerce
      Semantic Web Enabled E-Commerce
      Electronic Payments


Papers accepted for the Special Track will be published by ACM both
in the SAC 2005 proceedings and in the Digital Library.


     The International Journal of Web Services Research (JWSR)

A Publication of Idea Group Publishing/Information Science Publishing, USA


 Editor: Liang-Jie (LJ) Zhang, IBM T.J. Watson Research Center, USA

The International Journal of Web Services Research (JWSR) is a
high-quality refereed journal on Web services research and
engineering that serves as an outlet for individuals in the field
to publish their research, and for interested readers to follow it.
As a research and engineering journal, JWSR will facilitate
communication and networking among Web services/e-Business
researchers and engineers in a period when considerable changes are
taking place in Web services technology innovation, and will
stimulate production of high-quality Web services solutions and
architectures.

Web services are network-based application components with a
service-oriented architecture, built on standard interface
description languages and uniform communication protocols.  Because
of the importance of the field, standardization organizations such
as WS-I, W3C, OASIS, and the Liberty Alliance are actively
developing standards for Web services.  The International Journal
of Web Services Research (JWSR) is the first refereed,
international publication featuring only the latest research
findings and industry solutions dealing with all aspects of Web
services technology.  The journal's overall scope covers
advancements in the state of the art, standards, and practice of
Web services, and also identifies emerging research topics that
will define the future of services computing, including Web
services on Grid computing, multimedia, communication, e-Business,
and more.  In conclusion, JWSR provides an open, formal publication
of high-quality articles by theoreticians, educators, developers,
researchers, and practitioners, helping professionals stay abreast
of challenges in Web services technology.

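To make the paragraph above concrete: in the SOAP messaging style
that Web services standardize, a client wraps a method call in a
standard XML "envelope" and POSTs it to the service endpoint.  The
following stdlib-only Python sketch builds such an envelope; the
service namespace, operation, and parameter names are hypothetical
illustrations, not a real published service.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
APP_NS = "http://example.com/stockquote"   # hypothetical service namespace


def build_soap_request(operation: str, params: dict) -> bytes:
    """Serialize a SOAP 1.1 request envelope for the given operation."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{APP_NS}}}{operation}")
    for name, value in params.items():
        arg = ET.SubElement(call, f"{{{APP_NS}}}{name}")
        arg.text = str(value)
    # encoding="utf-8" yields bytes including the XML declaration
    return ET.tostring(envelope, encoding="utf-8")


request = build_soap_request("GetLastTradePrice", {"tickerSymbol": "IBM"})
print(request.decode("utf-8"))
```

In practice the operation and parameter names would be read from
the service's WSDL file rather than hard-coded, which is exactly
the machine-readable interface description the standards provide.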
SCOPE: Topics of interest include, but are not limited to, the
following:

      * Mathematical foundations for service-oriented computing
      * Web services architecture
      * Web services security and privacy
      * Frameworks for building Web services applications
      * Composite Web services creation and enabling infrastructures
      * Web services discovery, negotiation and agreement
      * Resource management for Web services
      * Solution management for Web services
      * Dynamic invocation mechanisms for Web services
      * Quality of service for Web services
      * Cost of service for Web services
      * Web services modeling
      * Web services performance
      * UDDI enhancements
      * SOAP enhancements
      * Case studies for Web services
      * e-Business applications using Web services
      * Grid based Web services applications (e.g. OGSA)
      * Business process integration and management using Web services
      * Multimedia applications using Web services
      * Communication applications using Web services
      * Interactive TV applications using Web services
      * Semantic services computing
      * Business Grid


                      eValid: A Quick Summary

eValid technology incorporates virtually every quality and testing
functionality in a full-featured browser.  Here is a summary of the
main eValid benefits and advantages.

  o InBrowser(tm) Technology.  All the test functions are built into
    the eValid browser.  eValid offers total accuracy and natural
    access to "all things web."  If you can browse it, you can test
    it.  And, eValid's unique capabilities are used by a growing
    number of firms as the basis for their active services
    monitoring offerings.

  o Mapping and Site Analysis.  The built-in WebSite spider travels
    through your website and applies a variety of checks and filters
    to every accessible page.  All done entirely from the users'
    perspective -- from a browser -- just as your users will see
    your website.

  o Functional Testing, Regression Testing.  Easy-to-use GUI-based
    record and playback with a full spectrum of validation
    functions.  The eV.Manager component provides complete, natural
    test suite management.

  o LoadTest Server Loading.  Multiple eValid instances play back
    multiple independent user sessions -- unparalleled accuracy and
    efficiency.  Plus: No Virtual Users!  Single and multiple
    machine usage with consolidated reporting.

  o Performance Tuning Services.  Outsourcing your server loading
    activity can surely save your budget and might even save your
    neck!  Realistic scenarios, applied from multiple driver
    machines, impose totally realistic -- no virtual users! -- loads
    on your server.

  o Web Services Testing/Validation.  eValid tests of web services
    begin by analyzing the WSDL file and creating a custom HTML
    testbed page for the candidate service.  Special data
    generation and analysis commands thoroughly test the web
    service and automatically identify a range of failures.

  o Desktop, Enterprise Products.  eValid test and analysis engines
    are delivered at moderate costs for desktop use, and at very
    competitive prices for use throughout your enterprise.

  o HealthCheck Subscription.  For websites up to 1000 pages, eValid
    HealthCheck services provide detailed analyses of smaller
    websites in a very economical, very efficient way.

  o eValidation Managed Service.  Being introduced soon, the
    eValidation Managed WebSite Quality Service offers
    comprehensive, user-oriented, detailed quality analysis for any
    size website, including those with 10,000 or more pages.

       Resellers, Consultants, Contractors, OEMers Take Note

We have an active program for product and service resellers.  We'd
like to hear from you if you are interested in joining the growing
eValid "quality website" delivery team.  We also provide OEM
solutions for internal and/or external monitoring, custom-faced
testing browsers, and a range of other possibilities.  Let us hear
from you!


                           INFINITY 2004
6th International Workshop on Verification of Infinite-State Systems
               (A Satellite Workshop of CONCUR 2004)


                     Saturday 4 September 2004
                          London, England

The aim is to provide a forum for researchers interested in the
development of mathematical techniques for the analysis and
verification of systems with infinitely many states.

The proceedings of INFINITY 2004 will be available at the workshop,
published as a Research Report of the University of Edinburgh School
of Informatics. After the workshop authors will be invited to place
fuller papers (of at least ten pages) in a volume of the Electronic
Notes in Theoretical Computer Science series (ENTCS).


                    Semantic Web Challenge 2004


             The International Semantic Web Conference
               Hiroshima, Japan, November 7-11, 2004

  Please also visit IEEE IS for the May/June issue about the
  Semantic Web Challenge 2003

                What is the Semantic Web Challenge?

How would you explain to your grandparents what the Semantic Web is?
What possibilities do the current techniques give us? We already
have quite some infrastructure, languages, reasoning engines, etc.
that enable us to develop integrated, useful, and attractive
applications.

The "Semantic Web Challenge" was initiated in 2003 to support
this development and to serve several purposes:

- Help us illustrate to society what the Semantic Web can provide
- Give researchers the possibility to compare results

                         What is the Goal?

The overall objective of the challenge is to apply "Semantic Web
techniques" in order to build an online application that integrates,
combines, and deduces information needed to assist users in
performing tasks. The challenge will be updated annually, according
to the development of the Semantic Web.
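As a toy, stdlib-only sketch of what "integrates, combines, and
deduces" means in Semantic Web terms: facts from two sources are
merged as RDF-style (subject, predicate, object) triples, and a
simple transitivity rule derives facts that neither source stated
directly.  All names below are illustrative, and a real entry would
use RDF infrastructure and a reasoning engine rather than hand-rolled
loops.

```python
# Two independent "sources" of facts, as (subject, predicate, object) triples.
source_a = {("Hiroshima", "locatedIn", "Japan")}
source_b = {("ISWC2004", "takesPlaceIn", "Hiroshima"),
            ("Japan", "locatedIn", "Asia")}

# Integration: merging triples is just a set union.
triples = source_a | source_b

# Deduction: "locatedIn" is transitive, and "takesPlaceIn" propagates
# through it.  Apply the rule until no new facts appear (a fixpoint).
changed = True
while changed:
    changed = False
    for s, p, o in list(triples):
        if p not in ("locatedIn", "takesPlaceIn"):
            continue
        for s2, p2, o2 in list(triples):
            if p2 == "locatedIn" and o == s2:
                new = (s, p, o2)
                if new not in triples:
                    triples.add(new)
                    changed = True

# A fact no single source contained:
print(("ISWC2004", "takesPlaceIn", "Japan") in triples)
```

The same pattern -- merge heterogeneous data, then infer new
statements from shared vocabulary -- is what challenge entries are
scored on, at much larger scale.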

The challenge intentionally does not define specific data sets
because the potential applicability of the Semantic Web is very
broad.

Therefore, a number of minimal criteria have been defined which
allow people to submit any type of ideas in the form of an
application. In addition to the criteria, a number of specific
desires have been formulated. The more desires are met by the
application, the higher the score will be. Please visit the above-
mentioned web site to find more details about the minimal
requirements and the desires.

    ------------>>> QTN ARTICLE SUBMITTAL POLICY <<<------------

QTN is E-mailed around the middle of each month to over 10,000
subscribers worldwide.  To have your event listed in an upcoming
issue, E-mail a complete description and full details of your Call
for Papers or Call for Participation at

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should
  provide at least a 1-month lead time from the QTN issue date.  For
  example, submission deadlines for "Calls for Papers" in the March
  issue of QTN On-Line should be for April and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc., and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items appearing in QTN represent the
opinions of their authors or submitters; QTN disclaims any
responsibility for their content.

TRADEMARKS:  eValid, HealthCheck, eValidation, TestWorks, STW,
STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR, eValid,
and TestWorks logo are trademarks or registered trademarks of
Software Research, Inc. All other systems are either trademarks or
registered trademarks of their respective companies.

        -------->>> QTN SUBSCRIPTION INFORMATION <<<--------

To SUBSCRIBE to QTN, to UNSUBSCRIBE a current subscription, to
CHANGE an address (an UNSUBSCRIBE and a SUBSCRIBE combined) please
use the convenient Subscribe/Unsubscribe facility at:


               Software Research, Inc.
               1663 Mission Street, Suite 400
               San Francisco, CA  94103  USA

               Phone:     +1 (415) 861-2800
               Toll Free: +1 (800) 942-SOFT (USA Only)
               FAX:       +1 (415) 861-9801
               Web:       <>