                     sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr
         +=======    Quality Techniques Newsletter    =======+
         +=======             August 2005             =======+

QTN is E-mailed each month to subscribers worldwide to support the
Software Research, Inc. (SR), eValid, and TestWorks user communities
and to provide other interested parties with information of general
use to the worldwide internet and software quality and testing
community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged, provided that the entire QTN
document/file is kept intact and this complete copyright notice
appears in all copies.  Information on how to subscribe or
unsubscribe is at the end of this issue.  (c) Copyright 2005 by
Software Research, Inc.


                       Contents of This Issue

   o  Taking Software Requirements Creation from Folklore to
      Analysis, by Larry Bernstein

   o  Advances in Software Metrics and Software Processes, Special
      JCST Issue

   o  eValid Success Story Summaries

   o  Handbook of Research on Open Source Software: Technological,
      Economic, and Social Perspectives, Edited by Kirk St. Amant
      and Brian Still

   o  Public eValid Training Course Schedule

   o  Theoretical Computer Science Special Issue: Automated
      Reasoning for Security Protocols.

   o  eValid: Usage Recommendations

   o  Rise and Fall of a Software Startup

   o  International Symposium on Workload Characterization

   o  Workshop on Visualizing Software for Understanding and
      Analysis

   o  QTN Article Submittal, Subscription Information


  Taking Software Requirements Creation from Folklore to Analysis

                          Larry Bernstein
                    Industry Research Professor
                  Stevens Institute of Technology

Large software systems are too often late, costly and unreliable.
Too often the requirements are not well understood or wrong.
Understanding and bounding the requirements in a specification is an
essential step to solving this problem.  As early as 1970 Royce
pointed out that unvalidated requirements lead to unmanageable
projects.  In particular, requirements complexity drives the effort
required to build software-intensive systems, the time it takes to
build them, and their inherent reliability.  Complexity management
is critical: by enhancing the existing simulation environments that
system engineers use to formulate alternative system designs,
software engineers can understand how sensitive the likelihood of
producing a workable system is to requirements complexity.
Model-Driven Software Realization is a current practice for
achieving this understanding.  Combining functional and performance
simulations with sizing and effort estimation leads to a holistic
understanding of feature, form, cost, schedule and trustworthiness.


The performance of new system capabilities for government agencies
and industry is often examined using simulators.  The simulators
provide insight into how the new capabilities will perform.  The
simulators shed little light on reliability, complexity and other
software engineering aspects of the proposed changes.  The
correlation between the complexity of the proposed capabilities and
that of an earlier system can be used to bound the
trustworthiness, possible schedule and potential costs for
implementing the new capability.  I call this the Lambda Protocol.
It combines software reliability with software sizing theory.  With
performance, reliability, schedule and cost-estimates in hand,
system and software engineers can make essential engineering
tradeoffs as they set a course of action.  By the way, do we need a
new job category, "Software Intensive Systems Engineering," to
describe the vital work that goes on at the beginning of a software
subsystem before implementation begins?  Some refer to this work as
"Systems Analysis," but that term has a heavy architecture
connotation.  The effort here is aimed at coming up with "a"
solution to the problem; later the developers will come up with
"the" solution.
Many projects have had the seeds of their failure sown at this
critical boundary.

When customers present ideas that need system solutions, engineers
have an ethical and professional obligation to help them define and
simplify their problem.  You must build the best solution to the
customer's problem, even if the customer does not yet understand how
to ask for it.

Here is one way.  The customer should be encouraged to write a short
prospectus that states the purpose of the system, its value and any
constraints essential to making it useful.  This should not be
confused with a complete set of requirements, which will emerge only
through simulation, analysis, prototyping and validation in an
iterative process.

                What Can Go Wrong With Requirements

The requirements and design phases are important steps in a software
project.  If these steps are not well done, the quality of the final
product will almost certainly be low.  At this stage, it is good to
maintain a fluid, dynamic synthesis methodology.  Any production
computer investment activity during this interval, including early
coding, imposes a psychological reluctance to change anything
already created.  Unrestrained ability to change is necessary to
developing a quality design.  Investing in prototyping and modeling
at this stage is very helpful, but both customer and designer must
remember that the artifacts produced will not necessarily find their
way directly into the product.

Without an iterative plan for approaching the development of
requirements, the design organization can find itself, months along
on the project, developing the wrong software functions.  For
example, the designer of an ordering system for a grocery could not
guess that suppliers' invoices would not be directly related to
orders because suppliers grouped orders for their own delivery
convenience. The customer would not mention this because "Mildred
always knew how to square things" and nobody ever thought about it.

A formal process has a cutoff point.  This prevents the continuing
stream of requirements changes that can prevent coding and testing
from moving along.  Changes can be made in an orderly way in future
releases after evaluation, but not by altering the requirements
document without formal and elaborate change control procedures.

The sales team can sometimes infect both the customer and the design
organization with the desire to gold plate the product and provide
what is desired rather than what is required.  The design team needs
to be fully aware of this tendency born of enthusiasm and resist it,
without being negative or disheartening.  The attitude must be to
get the core functionality right.

Finally, many design organizations do not have the necessary human
factors specialists to analyze the users' tasks.  Without specific
attention to the people who will use the product, the organization
can develop the wrong user interface.

When some service is especially critical or subject to
hardware/network failure, the application engineer needs to build
software fault tolerance into the application.  Typical concerns
include:

  - Consistency:  In distributed environments, applications
    sometimes become inconsistent when code in a host is modified
    unilaterally.  For example, the code in one server software
    component may be updated and this change may require sending out
    new versions of the client application code.  In turn, all
    dependent procedures must be re-compiled.  In situations where a
    single transaction runs across several servers a two-phase
    commit approach may be used to keep the distributed databases
    consistent.  If the clients and servers are out of step there is
    a potential for a failure even though they have been designed
    and tested to work together.  The software in the hosts needs to
    exchange configuration data to make sure they are in lock step
    before every session (see the sketch after this list).

  - Robust Security: Distributed application designers need to
    ensure that users cannot inadvertently or deliberately violate
    any security privileges.

  - Software Component Fail Over:  The use of several machines and
    networks in distributed applications increases the probability
    that one or more could be broken.  The designer must provide for
    automatic application recovery to bypass the outage and then to
    restore the complex of systems to its original configuration.
    This approach contains the failure and minimizes the execution
    states of the complex of systems.
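
As a concrete illustration of the consistency point above, here is a
minimal sketch (Python) of hosts exchanging configuration
fingerprints and refusing a session when they are out of step.  The
component names and the hashing choice are assumptions for
illustration, not taken from any particular product:

    import hashlib
    import json

    def config_fingerprint(component_versions):
        # Hash a component->version map so two hosts can compare
        # configurations by exchanging one short token.
        canonical = json.dumps(component_versions, sort_keys=True)
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    def begin_session(client_versions, server_versions):
        # Refuse the session unless both hosts run the exact same
        # configuration (the lock-step check described above).
        if (config_fingerprint(client_versions)
                != config_fingerprint(server_versions)):
            raise RuntimeError("configuration mismatch: redistribute "
                               "and recompile before starting a session")
        return "session established"

    # Hypothetical example: the server component was updated but the
    # client library was not, so the handshake fails.
    server = {"order-server": "2.1", "client-lib": "2.1"}
    client = {"order-server": "2.1", "client-lib": "2.0"}
    # begin_session(client, server)  # raises RuntimeError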

The reliability of a system varies as a function of time.  The
longer the software system runs the lower the reliability and the
more likely a fault will be executed to become a failure.  Let R(t)
be the conditional probability that the system has not failed in the
interval [0, t], given that it was operational at time t = 0.   The
most common reliability model is:

        R(t) = e^(-lambda * t)

where lambda is the failure rate and the Lambda of our requirements
protocol.  It is reasonable to assume that the failure
rate of a well maintained software subsystem is constant with time
after it is operational, with no functional changes, for 18 months
or more, even though faults tend to be clustered in a few software
components. Of course bugs will be fixed as they are found. So by
engineering the Lambda and limiting the execution time of software
subsystems we can achieve steady-state reliability.  Lui Sha of the
University of Illinois and I define:

        lambda = C / (E * e)

This is the foundation for the Lambda Protocols, and the
reliability model becomes:

        R(t) = e^(-k * C * t / (E * e))

This equation expresses the reliability of a software system in a
unified form related to software engineering parameters.  To
understand this claim, let's examine Lambda term-by-term.
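
As a quick numeric sketch of the model (all values below are
hypothetical, chosen only to show how the terms interact):

    import math

    def reliability(t, C, E, eps, k=1.0):
        # R(t) = e^(-k*C*t/(E*eps)) with complexity C, development
        # effort E, and tool/process effectiveness eps ("e" in the
        # article); k is a scaling constant.
        lam = k * C / (E * eps)   # the failure rate, lambda
        return math.exp(-lam * t)

    # Hypothetical feature package: halving complexity C improves
    # R(t) exactly as much as doubling effort E or effectiveness eps.
    print(reliability(t=100, C=5.0, E=200.0, eps=10.0))  # ~0.78
    print(reliability(t=100, C=2.5, E=200.0, eps=10.0))  # ~0.88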

Reliability can be improved by investing in tools (e), simplifying
the design (C), or increasing the effort in development (E) to do
more inspections or testing than required by software effort
estimation techniques.  The estimation techniques provide a lower
bound on the effort and time required for a successful software
development program. These techniques are based on using historical
project data to calibrate a model of the form:

        Effort (E) = a + b * (NCSLOC)^beta

        Schedule = d * E^(1/3)

where a, b, beta and d are calibration constants obtained by
monitoring previous developments using a common approach and
organization.
NCSLOC is the number of new or changed source lines of code needed
to develop the software system.  Barry Boehm's seminal books
Software Engineering Economics (Prentice Hall, 1981, ISBN
0-13-822122-7) and Software Cost Estimation with COCOMO II (Prentice
Hall, 2000, ISBN 0-13-026692-2, page 403) provide the theory and
procedures for using this approach.
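
A minimal sketch of how a calibrated model of this form is applied;
the constants a, b, beta and d below are placeholders for
illustration, not values from Boehm's data sets:

    def effort_pm(kncsloc, a=2.0, b=3.0, beta=1.12):
        # Effort (person-months) = a + b * (size)^beta, with size in
        # thousands of NCSLOC (an assumed unit, for plausible numbers).
        return a + b * kncsloc ** beta

    def schedule_months(effort, d=2.5):
        # Schedule (months) = d * Effort^(1/3).
        return d * effort ** (1.0 / 3.0)

    E = effort_pm(20.0)        # a hypothetical 20 KNCSLOC package
    T = schedule_months(E)
    # These are lower bounds for a successful program, not targets.
    print(round(E, 1), round(T, 1))   # ~88.0 person-months, ~11.1 months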

NCSLOC is estimated by computing the unadjusted function points of a
first cut architecture needed to build the prototype.  With the
number of unadjusted function points in hand a rough estimate of the
NCSLOC for each feature package can be obtained.  The prototype
validates the features in the context of expected system uses.  The
features are grouped into logical feature packages so that their
relative priority can be understood.  The simplified Quality
Function Deployment method (sQFD) is used to understand feature
package importance and ease of implementation.

The NCSLOC can be estimated by multiplying a conversion factor by
the unadjusted Function Points.  Nominal conversion factors are
available on the web or can be derived from previous projects.
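
For example (the conversion factor here is an assumption taken from
commonly cited backfiring tables, not from this article):

    # Roughly 53 NCSLOC per unadjusted function point is a commonly
    # cited backfiring figure for Java; substitute a factor derived
    # from your own previous projects where possible.
    def ncsloc_from_ufp(ufp, sloc_per_fp=53):
        return ufp * sloc_per_fp

    print(ncsloc_from_ufp(400))   # 400 UFP -> 21,200 NCSLOC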

The effectiveness factor measures the expansion of the source code
into executable code.  Software tools, processes and languages have
their own expansion factors that can be combined to characterize the
investment in the development environment.  As you would expect,
higher level languages have higher expansion factors.  Processes
like regression testing, code reviews and prototyping also increase
the expansion factor.  Investing in the expansion factor is like
investing in design and factory tooling to increase hardware
productivity.

Now, if we only knew the complexity of the feature packages, the
effort equation could be used to find the staffing.  In this
equation, beta is an exponent expressing the diseconomies of scale
for software projects, the complexity of the software and factors
related to the developers and their organization.  The COCOMO and
SLIM estimation models can be applied here, but be careful to use
average values for terms dealing with the software tools so that
productivity gains are not counted twice.  So, if we can estimate
complexity then we can estimate the schedule, reliability and cost
for each feature package.

                       Estimating Complexity

Modern society depends on large-scale software systems of
astonishing complexity.  Because the consequences of failure in such
systems are so high, it is vital that they exhibit trustworthy
behavior. Trustworthiness is already an issue in many vital systems,
including those found in transportation, telecommunications,
utilities, health care and financial services.  Any lack of
trustworthiness in such systems can adversely impact large segments
of society, as shown by software-caused outages of telephone and
Internet systems.  It is difficult to estimate the considerable
extent of losses experienced by individuals and companies that
depend on these systems.

The primary component of complexity is the effort needed to verify
the reliability of a software system.  Typically reused software has
less complexity than newly developed software because it has been
tested in the crucible of live operation.  But, this is just one of
many aspects of software complexity.  Among other aspects of
software engineering complexity are elements that can be mapped onto a
normative scale for each feature package.  The relative complexity
between the feature packages can then be used for estimation.  These
factors are:

 a. The nature of the application characterized as

     1. Real-time, where key tasks must be executed by a hard
        deadline or the system will become unstable, or software
        that must be aware of the details of the hardware operation.
        Operating systems, communication and other drivers are
        typical of this software (complexity factor = 10).  Embedded
        software must deal with all the states of the hardware.  The
        hardest ones for the software engineer to cope with are the
        "don't care" states.

     2. On-line transactions, where multiple transactions are run
        concurrently interfacing to people or to other hardware.
        Large database systems are typical of these applications
        (complexity factor = 5).

     3. Report generation and script programming (complexity factor
        = 1).

 b. The nature of the computations including the precision of the
    calculations.  Double precision is 3 times more complex than
    single precision.

 c. The size of the component in NCSLOC.

 d. The extra effort needed to assure correctness of a component.

 e. The program flow, measured using the Cyclomatic Complexity
    Metric.

For each feature package, estimate the effort, time and reliability.
Weight these factors by the performance and features in the feature
package, and then choose the best engineering solution that fits the
customer's original goals written in the prospectus.  Iterate the
results with the customer to establish willingness to pay and
potential simplifications before implementation begins.

This material is extracted from the author's new book that contains
specific processes and case histories, including extensive material
from missile systems software.

Reference:  See "Trustworthy Systems Through Quantitative Software
Engineering," Lawrence Bernstein and C.M. Yuhas, Wiley, 2005, ISBN:


        Advances in Software Metrics and Software Processes
Special Issue of the Journal of Computer Science and Technology (JCST)



The creation of large empirical databases of software projects such
as the ISBSG has stimulated research on estimation models, metrics
and methods for measuring and improving processes and products.
This can be viewed as a consequence of the maturing of the Software
Engineering discipline, fostered also by the creation of the Guide
to the Software Engineering Body of Knowledge (SWEBOK), which
emphasizes measurement as the foundation for the discipline.

With the inception of such software repositories, many emergent
issues - software metrics related to both products and processes -
are subject to research efforts in Software Engineering.  Examples
include metrics for open source development, new metrics for agile
methodologies, influence of team composition and productivity on
effort spent, new methods for measuring functional size such as
COSMIC FFP, data mining of software repositories, visualization
techniques of product and process data, techniques for decision
making such as Bayesian networks and system dynamics, etc.


The main objective of this special issue is to provide a synthesis
of recent advances and findings in Software Engineering measurement
and process issues.  Topics of interest include (but are not limited
to) the following areas:

  * Software estimates, their validity and the techniques that
    produce models of better quality.
  * Requirements and Functional Size Measurement: techniques,
    validity and their application to new and emerging technologies
  * New metrics and measurement techniques for new paradigms in
    Software Engineering such as aspect-oriented metrics, agile
    methodologies, etc.
  * Process and Product Improvement through measurement.
  * Decision-making in Software engineering including visualization
    techniques for project management
  * Data mining of software engineering repositories and Search
    Based Software Engineering
  * IT Benchmarking. Standards for the collection, comparison and
    validation of metrics
  * Software metrics support tools

                         Submission Details

Please send your papers to the guest editors.



                   eValid Success Story Summaries

        eValid's browser-based technology for analyzing and
testing websites has helped hundreds of customers achieve new levels
of accuracy and repeatability in their web applications. Here is a
sampling of success stories about how eValid has helped customers in
novel and unusual ways.

  o Remote Measurement & Reporting:  A popular auction website used
    a specially packaged version of eValid to remote-capture
    detailed user behavior data. The eValid package was deployed to
    1000's of field computers to obtain detailed end-user
    measurement of response time and performance data of how quickly
    the site behaved in a battery of two dozen separate tests.

  o Remote Loading:  Using eValid-developed functional tests and a
    battery of DSL-based test machines, eValid LoadTests were able
    to overload the website of a well-known document storage and
    manipulation company and identify major system bottlenecks.

  o Download Timing:  A major gaming company used eValid functional
    test monitoring services to analyze the time customers need to
    download their medium-sized (10 MByte) application.  After
    several months they were able to make server adjustments that
    decreased total download times and minimized the variance in
    performance their users experienced.

  o Production Monitoring:  A commercial monitoring firm uses eValid
    transactions on a commercial basis to protect customers' website
    investments by assuring availability and response time.  The
    service applies 1000's of plays per day -- over 2 million tests
    per year -- using multiple machines and multiple levels of
    monitoring.
  o Search Timing:  eValid scripts were used to establish actual
    "below the fold" timing data for a popular web search engine.
    After analysis of many weeks of data the customer made changes
    in their site structure that significantly improved response
    times and resulting customer success rates.

  o Three-Tier Monitoring:  A well known e-commerce site uses eValid
    script-based three-tier transaction monitoring to assure
    compliance with a minimum performance criterion ("a simulated
    user has to be able to complete a transaction in less than 120
    seconds").

  o Site Comparison:  On behalf of a European financial news
    journal, eValid website comparisons were done of 150 different
    financial institution websites. The detailed data developed in
    the eValid scans of these websites was used to characterize
    likely user website satisfaction in terms of response time,
    quality and integrity of links, and other matters.

  o VPN Appliance Testing:  A manufacturer of a Virtual Private
    Network appliance has been using eValid to generate large
    amounts of web browsing traffic to confirm the quality and
    reliability of their equipment when applied in "real world"
    conditions.
  o Monitoring Integration:  eValid has been integrated into a
    well-known system monitoring system to provide transaction
    oriented checking and timing support in addition to standard
    forms of network status reporting.  Dozens of customers are
    experiencing increased quality of service (QOS) with this
    combined analyzing and reporting technology.

  o Custom Browser Development: A firm involved in developing a
    sophisticated network monitoring system needed a custom
    browser to incorporate in their product. eValid built a special
    version for them, branded to their specification and dressed
    with their logos. The eValid-built browser component of their
    product enhanced the value of their business and helped them
    attract a profitable merger with a much-larger monitoring firm.

For complete details on all of these success stories please see the
explanations reachable from:


        Call for Chapter Proposals For the Edited Collection
           Handbook of Research on Open Source Software:
          Technological, Economic, and Social Perspectives

              Edited by Kirk St. Amant and Brian Still,
                       Texas Tech University


The decision to purchase or to use a particular software product can
be the choice that results in the success or the failure of an
organization.  For this reason, decision makers at different levels
and in a variety of fields need to understand the factors that
contribute to the effective adoption and use of software products.
Open source software (OSS) is increasingly viewed as a viable option
that can allow a variety of public and private organizations to
achieve their desired goals.  OSS adoption and use, however, is
complicated by the social agendas and the economic goals many
individuals attach to the use of OSS materials.

                       OBJECTIVE OF THE BOOK

The purpose of this handbook is to provide readers with a
foundational understanding of the origins, operating principles,
legalities, social factors, and economic forces that affect the uses
of OSS.  For this reason, the proposed handbook would focus on areas
and concepts essential to understanding when and how various
organizations should adopt OSS.  Chapters would present short
(3,500-5,000 word), focused perspectives on a particular aspect of
OSS adoption and/or use.  Such perspectives would be designed to
help business persons, researchers, and other decision makers make
more informed choices that would facilitate the ease and
effectiveness with which their organization used or interacted with
OSS products.

                          TARGET AUDIENCE

The target audience for this handbook would be five groups that
would use this text for a variety of reasons.

                         RECOMMENDED TOPICS

Prospective subject areas and specific topics for this publication
include, but are not limited to, the following:

The Hacker Movement & the Evolution of Open Source Software: Early
Ideas to Current Practices
* The Origins of the Hacker Movement
* Who is an OSS Developer and What is OSS Culture?
* The Browser Wars and the Birth of Mozilla

Benefits and Limitations: Comparing Open Source Software to
Proprietary Software Products
* Overview of Differences Between Open Source Software and
Proprietary Software
* Forking Code and Product Development
* Providing Customer Service in a Not-for-Profit Environment

Tools and Technologies: Selecting the Right Software for the Job
* Content Management and Programs for Organizing Information
* Desktop Publishing: Is OSS a viable alternative to Windows?
* How to evaluate OSS in terms of staffing requirements, customer
support, long-term viability

Business Models and Open Source Software: Balancing Profits with
Ideals
* Red Hat: Providing Support and Making Money
* Selling the Server: Apache and the Business of Providing Online
* The Penguin Business: Linus Torvalds and the Rise of Linux

Licensing: Examining the Agreements that Make Use Possible
* Copyleft, OSS Licenses, and the Concept of Ownership
* Licensing Choices: Benefits and Limitations of OSS Licenses
* Licensing and Product Development: What Does the User Need to Know?

The (Il)Legalities of Open Source Software: Code, Copyright, and
* Ripping multimedia: The Copyright Problems Related to OSS Use
* Privacy, Security, and Surveillance: Cryptography and Government
* Whose Code is it Anyway? Reflections on Microsoft vs. the SCO

Going Global: The International Spread of Open Source Software Use
* The EU's Adoption of OSS: A Model for the Future?
* Culture and Coding: Can We Create International Standards for OSS?
* Wiring the World: The Role of OSS in Shrinking the Global Digital
  Divide

Open Source and Education: Expanding Pedagogical Practices
* Uses of Drupal to Enhance Learning
* Claroline and the Packaging of Distance Education for Online
* Blogging as Educational Activity

Government Applications of Open Source Software
* Examining OSS Use at the Federal and the Local Levels
* Getting the Word Out: Government Uses of OSS to Interact with
* The Security Factor: National Defense, OSS, and Terrorist Networks

Expanding the Uses of Open Source Software: Perspectives for the
Future
* Shifting the Model: Will OSS Become a For-Profit Industry?
* Development Practices: How Will the Code Be Created?
* Culture and Code: Projections on OSS and Global Computer Use


Please send inquiries or submit material electronically (Rich Text
files) to the editors at:



               Public eValid Training Course Schedule

The next public eValid training courses in San Francisco have been scheduled.
Space for these events is limited so reserve your seat early.

    o Wednesday, Thursday, Friday: 26-28 October 2005
    o Wednesday, Thursday, Friday: 14-16 December 2005
    o Wednesday, Thursday, Friday: 22-24 February 2006

Options for 1-day and 2-day versions are possible. The complete
curriculum for these training sessions can be seen at:

Sign up for eValid training at:


           Special Issue of Theoretical Computer Science
       on Automated Reasoning for Security Protocol Analysis


In connection with the Second Workshop on Automated Reasoning for
Security Protocol Analysis (ARSPA'05), which took place as a
satellite event of ICALP'05, we are guest-editing a Special Issue of
Theoretical Computer Science devoted to original papers on formal
security protocol specification, analysis and verification.

The guest editors are Pierpaolo Degano (Universita` di Pisa, Italy)
and Luca Vigano` (ETH Zurich, Switzerland).  Contributions are
welcomed on the following topics and related ones:

  - Automated analysis and verification of security protocols.
  - Languages, logics, and calculi for the design and specification
      of security protocols.
  - Verification methods: accuracy, efficiency.
  - Decidability and complexity of cryptographic verification.
  - Synthesis and composition of security protocols.
  - Integration of formal security specification, refinement and
      validation techniques in development methods and tools.


               eValid -- Some General Recommendations

Here are common eValid problem areas and references to pages that
provide general good-practice recommendations.

* Functional Testing Concerns

  Recording and playing scripts, with validation, is a sure way to
  confirm operation of a web site or web application.

    o Protecting Login Account Names and Passwords

      If you are recording logging into a site, eValid will need to
      make a record of your account name and password.  For the best
      security, you should record login and password details in
      encrypted form.  There's an option in the Script Window Dialog
      to turn on the Encoded Input option that protects critical
      private information.

    o Initial State Concerns

      Being a fully stateful recording and playback engine, eValid
      is very sensitive to the initial state when playback begins.
      Here are some recommendations about how to manage your test's
      Initial State effectively.

    o Session Cookies

      Session cookies are remembered inside eValid and the surest
      way to clear them is to close eValid and launch it again.

    o Modal Dialogs and Modal Logins

      Because of the nature of modal dialogs you may not be able to
      use them directly.  Instead, eValid provides a way to
      construct a reliable script by creating the correct commands
      via the Script Window Dialog.  Check the documentation on
      modal dialog support and on testing modal logins:

    o Testing "Opaque" Objects

      Certain objects are opaque relative to eValid's internal view
      of web page properties, and have to be treated differently.
      These object types include Java Applets and FLASH objects,
      discussed here:

      In addition, it may be helpful to see how to use eValid's
      Application Mode:

* Server Loading Concerns

eValid applies load to a web server using the capability to run
multiple eValid browser instances.

  o Machine Adjustments

    If you want to get more than ~25 eValid copies running at one
    time you probably need to make Machine Adjustments to optimize
    your computer as a server loading engine.

  o Ramping LoadTest Runs

    The most common form of load test involves ramping up the server
    load so you can study how server performance degrades under
    increasing load.
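
eValid itself drives load by launching multiple browser instances;
purely as a generic illustration of the ramp idea (this is not
eValid's interface, and the target URL is hypothetical), a driver
might raise the simulated-user count in steps and report timing at
each level:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    def one_play(url):
        # Stand-in for one scripted browser playback.
        start = time.time()
        urllib.request.urlopen(url, timeout=30).read()
        return time.time() - start

    def ramp(url, levels=(5, 10, 15, 20, 25)):
        # Step the simulated-user count upward and report the mean
        # response time at each level to expose degradation.
        for users in levels:
            with ThreadPoolExecutor(max_workers=users) as pool:
                times = list(pool.map(one_play, [url] * users))
            print(users, "users -> mean",
                  round(sum(times) / len(times), 2), "seconds")

    # ramp("http://www.example.com/")   # hypothetical target site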

* Site Analysis Concerns

eValid site analysis runs are a powerful way to confirm website
quality.

  o Avoiding Logout During Scan

    A common problem during a site analysis scan is that eValid logs
    you out before the scan is done!  This happens when you start
    the scan after logging into a protected area and the eValid
    search spider navigates you to the "logout" page.  The way to
    avoid this is to make sure that your Blocked URLs List includes
    "logout" and "signoff".


                Rise and Fall of a Software Startup
                            By Al Davis

These days it seems that everybody wants to get rich by starting a
new company. And because just about anybody can create a "software
product" with relatively little training and cost, information
technology (IT) companies have become the most likely goal.
However, the building of the product (whether in software or any
other medium) is
the most trivial and usually the least expensive and least risky
part of any commercial endeavor.

This talk removes some of the mystique surrounding the creation of
IT companies. It explores some of the key aspects of a startup
business, e.g., personnel, finance, productization, marketing,
sales, branding, establishment of appropriate boards, facilities,
and exit strategies. For each aspect, the speaker will describe the
fundamental goals and approaches, and will describe his personal
experiences (some successful and some not so successful) on his own
three startup companies.

                          Author Biography

Al Davis is Professor of Information Systems at the University of
Colorado at Colorado Springs. He was a member of the board of
directors of Requisite, Inc., acquired by Rational Software
Corporation in February 1997, and subsequently acquired by IBM in
2003. He has consulted for many corporations over the past twenty-
seven years, including Boeing, British Telecom, Cadence Design
Systems, Cigna Insurance, Federal Express, Flight Dynamics, Fujitsu,
Great Plains Software, IBM, Loral, McDonald's, MCI, Mitsubishi
Electric, NEC, NTT, Rational Software, Rockwell, Schlumberger,
Sharp, Software Productivity Consortium, and Storage Tek.

For a full biography please go to:

This talk was given at National ICT Australia, Australian
Technology Park, Bay 15, Sydney, on Tuesday, August 9, 2005.


                             IISWC 2005
The Annual IEEE International Symposium on Workload Characterization
Sponsored by IEEE Computer Society and the Technical Committee on Computer Architecture
                         October 6-7, 2005
                  The Hotel Marriott at the Capitol
                           Austin, Texas

                           FINAL PROGRAM
                    (Paper Titles & Authors Only)

General Chair: Lizy John, University of Texas
Program Chair: David Kaeli, Northeastern University

  * Exploiting Program Microarchitecture Independent Characteristics
    and Phase Behavior for Reduced Benchmark Suite Simulation, Lieven
    Eeckhout, Ghent University; John Sampson and Brad Calder, UCSD

  * Detecting Recurrent Phase Behavior under Real-System Variability
    Canturk Isci and Margaret Martonosi, Princeton University

  * A Performance Characterization of High Definition Digital
      Video Decoding Using H.264/AVC Esther Salami, Alex
      Ramirez, Mateo Valero, UPC

  * The ALPBench Benchmark Suite for Multimedia Applications
      Manlap Li, UIUC, Ruchira Sasanka, UIUC, Sarita Adve, UIUC,
      Yen-kuang Chen and Eric Debes, Intel

  * Reducing Overheads for Acquiring Dynamic Memory Traces Xiaofeng
    Gao, Michael Laurenzano, Beth Simon, and Allan Snavely, San
    Diego Supercomputer Center

  * Accurate Statistical Approaches for Generating Representative
    Workload Compositions Lieven Eeckhout, Ghent University, Rashmi
    Sundareswara, University of Minnesota, Joshua J. Yi, Freescale,
    David J. Lilja and Paul Schrater, University of Minnesota

  * A multi-level comparative performance characterization of
    SPECjbb2005 versus SPECjbb2000 Ricardo Morin, Anil Kumar, and
    Elena Ilyina, Intel Corporation

  * Workload Characterization of Biometrics Applications on Pentium
    4 Microarchitecture Chang-Burm Cho, Asmita V. Chande, Yue Li and
    Tao Li, University of Florida

  * Characterization and Analysis of HMMER and SVM-RFE Parallel
    Bioinformatics Applications Uma Srinivasan, Peng-Sheng Chen,
    Qian Diao, Chu-Cheow Lim, Eric Li, Roy Ju, Yimin Zhang, Intel

  * Parallel Processing in Biological Sequence Comparison using
    General Purpose Processors Esther Salami, Alex Ramirez, and
    Mateo Valero, UPC

  * Efficient Power Analysis using Synthetic Testcases Robert H.
    Bell, Jr. IBM and Lizy K. John, University of Texas at Austin

  * Study of Java Virtual Machine Scalability Issues on SMP Systems
    Zhongbo Cao, Wei Huang, and J. Morris Chang, Iowa St.

  * Workload Characterization for the Design of Future Servers Bill
    Maron, Thomas Chen, Duc Vianney, Bret Olszewski, and Steve
    Kunkel, IBM Corp.

  * Understanding the Causes of Performance Variability in HPC
    Workloads David Skinner and William Kramer, NERSC/LBL

  * Understanding Ultra-Scale Application Communication Requirements
    John Shalf, Leonid Oliker, and David Skinner, LBL

  * Characterizing Sources and Remedies for Packet Loss in Network
    Intrusion Detection Systems Lambert Schaelicke, Notre Dame

  * Toward an Accurate Evaluation of Sensor Network Processors, Leyla
    Nazhandali, Michael Minuth, and Todd Austin, University of
    Michigan

          VISSOFT 2005: 3rd IEEE International Workshop on
        Visualizing Software for Understanding and Analysis
                         Budapest, Hungary
                        September 25th, 2005
                    (co-located with ICSM 2005)


The VISSOFT 2005 workshop will focus on visualization techniques
that draw on aspects of software maintenance, program comprehension,
reverse engineering, and reengineering.  This event will gather tool
developers, users and researchers in a unique format to learn about,
experience, and discuss techniques for visualizing software for
understanding and analysis.  The goals of the workshop are to work
towards being able to answer the question of what is a good
representation for a given situation, data availability, and
required tasks.

VISSOFT 2005 will feature 19 position papers, interactive tool
demos, and group discussions.  See the workshop site for details on
the program, registration, and location.


    ------------>>> QTN ARTICLE SUBMITTAL POLICY <<<------------

QTN is E-mailed around the middle of each month to over 10,000
subscribers worldwide.  To have your event listed in an upcoming
issue, E-mail a complete description and full details of your Call
for Papers or Call for Participation.

QTN's submittal policy is:

o Submission deadlines indicated in "Calls for Papers" should
  provide at least a 1-month lead time from the QTN issue date.  For
  example, submission deadlines for "Calls for Papers" in the March
  issue of QTN On-Line should be for April and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK but may be serialized.
o Length of submitted calendar items should not exceed 60 lines.
o Publication of submitted items is determined by Software Research,
  Inc., and items may be edited for style and content as necessary.

DISCLAIMER:  Articles and items appearing in QTN represent the
opinions of their authors or submitters; QTN disclaims any
responsibility for their content.

TRADEMARKS:  eValid, HealthCheck, eValidation, InBrowser TestWorks,
STW, STW/Regression, STW/Coverage, STW/Advisor, TCAT, and the SR,
eValid, and TestWorks logo are trademarks or registered trademarks
of Software Research, Inc. All other systems are either trademarks
or registered trademarks of their respective companies.

        -------->>> QTN SUBSCRIPTION INFORMATION <<<--------

To SUBSCRIBE to QTN, to UNSUBSCRIBE a current subscription, to
CHANGE an address (an UNSUBSCRIBE and a SUBSCRIBE combined) please
use the convenient Subscribe/Unsubscribe facility at:


               Software Research, Inc.
               1663 Mission Street, Suite 400
               San Francisco, CA  94103  USA

               Phone:     +1 (415) 861-2800
               Toll Free: +1 (800) 942-SOFT (USA Only)
               FAX:       +1 (415) 861-9801
               Web:       <>