QW'97 Paper and Presentation Abstracts (Updated 27 June 1997)
The 10th International Software Quality Week Technical Program is organized into separate tracks depending on the content of the presentations. Below are brief abstracts for all QW'97 half-day Tutorials, Keynotes, QuickStart Mini-Tutorials, Regular Technical Papers and Vendor Presentations.
Insofar as possible we are providing post-Conference hotlinks to speakers' papers and/or presentations. In some cases, also, you can download papers and/or presentations in their entirety in PostScript or PDF format. See, for example, Danny Faught's downloadable paper on the Exemplar System.
The main objective of this paper is to clearly relate software process improvement to a business goal by using the SPICE model (ISO15504) to prepare an organization for ISO9000 certification. The outcome of the project is a technical guide, in software terms, for an improvement plan aimed at building an ISO9000-compliant quality system in the organization.
JUMP TO TOP OF PAGE
RETURN TO QW'97 PROGRAM
Using models to describe the behavior of a system is a proven advantage to teams developing tests. Models can be utilized in many ways throughout the product life-cycle. This paper focuses on the testing benefits achieved on real applications, reviews some of the obstacles that have prevented the application of models to software testing, and presents some simple countermeasures.
Testing expertise is the ability to think critically about technology so as to help project teams ship the right product. This talk presents a mental model of testing that we use at ST Labs to perform professional testing under chaotic conditions, and to help our testers become genuine testing experts.
INTRODUCTION TO TESTING: Goals; objectives; outcomes.
INTEGRATION TESTING OVERVIEW: Goals of integration testing; prerequisites to integration testing.
SYSTEM TESTING OVERVIEW: System testing goals; system tests.
TESTING TECHNIQUES: Control flow testing: why and what it is; path selection criteria; control-flow testing tools.
TRANSACTION FLOW TESTING: Data flow testing; domain testing; finite-state testing; syntax testing.
TOOLS AND PERSPECTIVE: Limitations of manual testing; a basic tool kit; coverage certifiers; side benefit of coverage tools; test drivers; test design automation tools; capture/replay; test generators; not enough tools?
PERSPECTIVE ON TESTING: techniques are tool intensive; tool building and buying; realistic payoff projections; tool penetration problem and solution; recap.
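The outline's "coverage certifiers" and "basic tool kit" items lend themselves to a small illustration. The sketch below is not any tool from the tutorial; it merely shows, using Python's `sys.settrace`, how the lines a test executes can be recorded, revealing what extra coverage a second test buys. The `classify` example function is invented.

```python
# Toy illustration of what a coverage certifier measures: use
# sys.settrace to record which lines of a function each test executes.
# Real coverage tools instrument the program under test; this sketch,
# including the classify() example, is invented for illustration.
import sys

def trace_lines(func, *args):
    executed = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def classify(x):
    if x > 0:
        return "positive"
    return "non-positive"

hit_pos = trace_lines(classify, 5)             # covers the x > 0 path only
hit_all = hit_pos | trace_lines(classify, -5)  # second test adds the other path
print(len(hit_pos), len(hit_all))  # 2 3
```

The second call makes the missed branch visible as a gap between the two counts, which is exactly the feedback a coverage tool provides.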
The Year 2000 problem is increasingly in the press, on the Internet, and in newsgroups these days -- and it promises to dominate the software industry for the next three years. To give the conference participant a quick head start and an overview of what's what in Y2K, Beizer has prepared this (possibly real?) survey of some 600,000 URLs recently gleaned from the Net. It includes consultants and service vendors, factories and outsourcing, tools and their vendors, and newsgroup discussion summaries. In order to (1) save space in the proceedings, (2) save time for participants who have had enough of the conference and are anxious to leave for home, and (3) avoid libel suits, entries have been compressed, merged, fictionalized, and fantasized.
Software is hard because it has a weak theoretical foundation. Most of the theory that does exist focuses on the static behavior of the software - analysis of the source listing. There is little theory on its dynamic behavior - how it performs under load. Even after we find and fix a bug, how do we restore the software to a known state, one where we have tested its operation? For most systems, this is impossible except with lots of custom design that is itself error-prone.

The four measures I use to tell just how well my software systems perform are: reliability, success with unexpected stimuli, true system capacity, and the number of service calls. They all measure the system from the viewpoint of its field execution, and the study of the dynamic behavior of software addresses these issues. Most of the theory for software technology today focuses on producing the source code; with the exception of testing, there is very little work on how software performs in the field. The body of knowledge exploring requirements, estimation, software design, encapsulation, data flow design, decomposition, structural analysis, and code complexity studies the static nature of software and concentrates on the source code. I call this the study of 'Software Statics.' Studies in this area have improved software quality and development and should continue.
DOWNLOAD A COMPLETE COPY OF THIS PRESENTATION IN RTF FORMAT
We present a general method to construct a set of test paths satisfying a selected criterion within a family of control flow and data flow-based coverage criteria. The method builds on the recent concept of a "spanning set" of entities, which is the minimum subset of program entities (e.g., branches or definition-use associations) guaranteeing full coverage.
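The spanning-set idea can be sketched roughly as follows. The subsumption relation and entity names below are invented, and the relation is given directly rather than derived from a flowgraph (which is the hard part the paper addresses): if covering one entity guarantees coverage of others, only the entities no other entity implies need explicit test paths.

```python
# Sketch of the "spanning set" concept: keep only the coverage
# entities that are not guaranteed ("subsumed") by covering some
# other entity. The subsumption relation is supplied by hand here;
# computing it from program structure is what the method provides.

def spanning_set(subsumes):
    """subsumes[a] = set of entities covered whenever a is covered."""
    dominated = set()
    for a, implied in subsumes.items():
        dominated.update(b for b in implied if b != a)
    # Entities never implied by another entity must be targeted directly.
    return set(subsumes) - dominated

# Toy example: covering branch b3 implies branches b1 and b2.
relation = {
    "b1": {"b1"},
    "b2": {"b2"},
    "b3": {"b3", "b1", "b2"},
}
print(sorted(spanning_set(relation)))  # ['b3']
```

A test path exercising only `b3` then certifies full branch coverage of this toy fragment.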
Classes (objects) have distinctly different behavior patterns (modes). A mode must be identified to select an effective test strategy. This tutorial presents new approaches for domain/state modeling to characterize class modality and shows how to produce effective test suites from these models.
TOPICS: Simple and complex domains. Class modalities. State machine basics. Modeling classes with state machines: state, preconditions, and postconditions. Domain analysis for classes: public and private domains. Deriving the state transition tree. Sneak-path testing. Test adequacy: conformance, the N+ cover. Considerations for test suite compression.
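One topic above, deriving the state transition tree, admits a minimal sketch. The turnstile machine below is a stock textbook example, not one of the tutorial's models, and the tree derivation shown (expand breadth-first, stop a branch when it revisits a state) is only the basic textbook construction:

```python
# Derive a state transition tree from a small state machine:
# expand from the initial state and stop a branch once it reaches
# a state already seen on that branch. Each resulting event
# sequence is a candidate test case.

def transition_tree(machine, start):
    sequences = []
    frontier = [(start, [], {start})]   # (state, event path, states on path)
    while frontier:
        state, path, seen = frontier.pop(0)
        for event, target in sorted(machine.get(state, {}).items()):
            seq = path + [event]
            sequences.append((tuple(seq), target))
            if target not in seen:      # only unexplored states grow the tree
                frontier.append((target, seq, seen | {target}))
    return sequences

# Classic two-state turnstile: a coin unlocks it, a push relocks it.
turnstile = {
    "locked":   {"coin": "unlocked", "push": "locked"},
    "unlocked": {"coin": "unlocked", "push": "locked"},
}
for seq, final in transition_tree(turnstile, "locked"):
    print(" -> ".join(seq), "ends in", final)
```

Each printed sequence becomes a conformance test: drive the object with the events and check it lands in the expected state.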
This talk will take a look at our own tools and practices and give many real-world examples of how you can gain more efficiency from any test organization. Going beyond the mainstream metrics, attendees should be able to apply solutions to their own teams that will benefit the staff, the product, and the schedule.
In this work we deal with the problem of testing timing behavior of real-time systems. We introduce several testing criteria based on the information provided by the design description of the system and its derived formal descriptions.
After the initial shock of an assessment that shows an organization functions at level-1 on the CMM scale, a common response is to set up a Software Engineering Process Group (SEPG) and then expect the SEPG to write organization-wide processes. This frequently does not work, and becomes the source of a second and often more paralyzing shock. In most CMM level-1 organizations, projects: 1) are not aware of the processes they currently use; and 2) resist the imposition of new ones. When their best efforts are resisted, many SEPG members become frustrated and cynical. More serious progress seems farther away than ever, and the organizations sometimes give up.

It has long been recognized that it is extremely difficult for organizations to move from CMM level 1 to level 2. In the latest SEI report, 68.8% of assessed organizations are at level 1. Of these 477 assessed organizations, 72 have reported a reassessment. The results: only 33% have moved from level 1 to level 2. The median reported time necessary for moving from level 1 to level 2 is a relatively long 27 months, but varies widely from 11 months to over 50 months. (SEI Process Maturity Profile of the Software Community 1996 Update, April 1996.)
This paper will examine why SEPGs often do not achieve the progress they expect, and what role SEPGs can most appropriately play in a level-1 organization.
Hewlett-Packard Common Desktop Environment applications can be "localized". Localization means that the user menus and screens appear in a user's native language. Because people are most comfortable using software when the user messages are in a familiar language, localization is an important part of selling software in non-English-speaking nations. It is not unusual for an application to be localized into a dozen languages.

Software in our lab is developed originally with English user messages. The UNIX mechanism for localizing software is to store all user message strings in a file called a message catalog. A copy of this message catalog is then sent to a language expert, known as a localizer, who translates the English message strings into the target language, such as French or Japanese. The localizer then returns the translated message catalog to our lab. When the application is run with the translated message catalog in place, all user menus and messages should appear in the target language.

A fundamental problem with testing localizations is that expertise and capability are typically separated in space. The localizers have the language expertise, but they do not have access to the software to run the application. The development team has expertise in the application, but we lack the language expertise to detect subtle localization problems.
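One class of localization defects can be caught mechanically, without language expertise on either side. The sketch below assumes a simplified catalog (a key-to-string dict) rather than the actual UNIX message-catalog format, and checks only %s/%d printf-style specifiers; it is an illustration of the idea, not HP's tooling:

```python
# Verify a translated catalog keeps every message key and the same
# printf-style format specifiers as the English original. Catalog
# layout (plain dict) and the messages are invented for illustration.
import re

def specifier_mismatches(original, translated):
    spec = re.compile(r"%[sd]")
    problems = []
    for key, english in original.items():
        if key not in translated:
            problems.append((key, "missing translation"))
        elif sorted(spec.findall(english)) != sorted(spec.findall(translated[key])):
            problems.append((key, "format specifiers differ"))
    return problems

english = {"greet": "Hello, %s!", "files": "%d files in %s"}
french  = {"greet": "Bonjour, %s !", "files": "%d fichiers"}
print(specifier_mismatches(english, french))
# [('files', 'format specifiers differ')]
```

Such a check runs in the development lab with no knowledge of French; the localizer never needs to run the application to have this class of error reported.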
Methods such as Orthogonal Defect Classification and the Butterfly model have changed the face of what software engineering can do, bringing defect reduction, cost control, and productivity enhancements that stretch the limits of what the industry has witnessed in the past two decades. One example broke through the stone wall that classical methods reached at a factor of three to four, and sustained a steep drop over four years down to a factor of 50, accompanied by cycle-time and productivity increases. We are embarking on a new world -- one where some of the dreams that the stalwarts of software engineering once envisioned are being realized by our technology today.
Increasingly, software systems are being written as distributed systems. The advent of Java and the World Wide Web will no doubt accelerate this trend. Distributed systems have many benefits, such as improved performance and flexibility. Unfortunately, it is much more difficult to test such systems. Distributed systems are inherently nondeterministic; executing the same program on the same inputs might produce different results depending on such things as the system load and scheduling algorithm. There has been considerable work on developing testing platforms for distributed systems that primarily monitor behavior and support re-execution or post-mortem examination of intermediate results. As with all testing approaches, the results are only as good as the selected test data; the next input may expose an error in the system.

An alternative approach is to verify that certain properties are guaranteed to hold for any execution of the system. This approach has the advantage that it is independent of the choice of test data and of any of the factors that could affect execution, such as the system load. Formal verification techniques have not been widely successful in the past, primarily because they require a complete specification of the behavior, which is usually very difficult to write and often as prone to errors as the code itself; because they require a considerable amount of expertise on the part of the user; and because they are computationally expensive. We propose an alternative approach, which uses data flow analysis to verify, automatically and efficiently, user-specified properties of distributed systems.
DOWNLOAD A COPY OF THE SLIDES IN POSTSCRIPT
This case study illustrates the dynamics of applying test automation to a major project development within a large MIS environment. It demonstrates how the utilization of technology and structured processes contributed to project objectives and overall success of the effort. All critical success factors from test methodology to tool selection and utilization to test data planning and construction are discussed.
A suite of flexible tools and an adaptable testing process were developed at the Distributed Active Archive Center at NASA's Goddard Space Flight Center. The tools were created to test dynamic science software on a rapidly evolving system. The system will archive and distribute a massive flow of satellite data.
This tutorial describes the application of certain "formal" methods of software specification and design to real-world applications. The particular formalism used is the functional model of software behavior, embodied in the "box structures" specification and design method. While this formal method is not as well known as others, it has distinct advantages in usability and in its ability to scale up to larger applications.

The key to success with any formal method is to treat formalism as a problem-solving tool. Thus, formal methods must be embedded within a reasonable software-engineering process if they are to succeed. The formal methods should also offer their users the ability to tailor the level of rigor to meet specific project needs: application developers should not be required to use the same degree of care as medical-device developers.
The intended audience for this tutorial includes developers, technical leaders, and managers who want to begin using formal methods to improve quality and life-cycle productivity.
The growing world-wide acceptance of ISO 9001 presents DoD software and system suppliers, familiar with Version 1.1 of the Software Engineering Institute (SEI) Capability Maturity Model (CMM), with requirements to comply with ISO 9001. At the same time, commercial software and system suppliers, already familiar with ISO 9001 and ISO 9000-3, are considering the value of the CMM as a tool for process definition and improvement.

Software Engineering Process Group teams and Quality System implementors must reconcile at least two models (ISO 9001 and the CMM) - both of which describe business practices and which are often portrayed as either in conflict or as mutually exclusive alternatives.
Many of the misunderstandings associated with ISO registration for software engineering organizations are based on interpretations and successful applications of the standard for manufacturing organizations. Even with the guidance available in ISO 9000-3 and TickIT, it is not always clear to auditors and implementors how ISO 9001 applies to the current, wide variety of software development paradigms.
Many systems on the Internet are vulnerable to sniffers, spoofers, spammers, scanners, and sundry other threats. This talk will characterize the extent of the threat, methods and tools of attack, and countermeasures for preventing or detecting attacks. The role of software quality will be discussed.
This paper presents a statistics-based integrated test environment for distributed applications. To address two main issues, when to stop testing and how good the software is after testing, it provides automatic support for test execution, test development, test failure analysis, test measurement, test management and test planning.
A generic tableau algorithm, which is the basis for a general customizable method for producing oracles from temporal logic specifications, is described in [5]. The algorithm accepts semantic rules as parameters for building the semantic tableau for a specification. Parameterizing the tableau algorithm by semantic rules permits it to easily accommodate a variety of temporal operators and provides a clean mechanism for fine-tuning the algorithm to produce efficient oracles. In this paper, we report on an implementation of the algorithm and on our experience with its use.
Managing software development projects has become the critical linchpin in delivering quality software systems. It can also be the recipe for failure. The gap between the best software development practice and the average practice is quite large -- and it is software management that largely serves to bridge that gap. Come hear a practical management-level presentation for improving productivity, assuring quality, lowering production costs, adhering to proven process-level activities, dodging the inherent pitfalls, and avoiding software malpractice litigation. Hear about risk management techniques, management tools, measuring productivity and level of effort, and most important, practical principles and guidelines for managing the most important resource -- your people. The intent is to provoke a lively discussion.
Business is war, and quality is a battleground! The military metaphor provides rich analogies for reasoning about how to optimize QA. This session shows how you can leverage your resources to find bugs earlier (reducing development cost) and slash failures in the field (reducing maintenance and operational costs). The fast-paced discussion shows how you can sabotage your efforts if you ignore the five overarching fundamentals and seven underlying principles. Bonus: Practical strategies and tactics you can use right away.
This talk will present the Hewlett-Packard Convex Division's experiences with building a reliability test for the Exemplar SPP-UX operating system. It will touch on issues associated with testing operating systems, testing a large parallel supercomputer architecture, and practical aspects of testing software reliability in a market-driven short-lifecycle environment.
DOWNLOAD A COMPLETE COPY OF THIS PAPER IN POSTSCRIPT
To reduce the cost of regression testing, we use test cases whose program paths traverse the modified code. Current methods determine such modification-traversing test cases. However, a modification-traversing test does not guarantee that executing the program with that test produces a different output, i.e., the test is not necessarily modification-revealing. To overcome this shortcoming we introduce a new method, based on program mutation, which requires not only that the original testing criterion be satisfied but also that every modification in the program be tested, so that the original and the modified program can be differentiated. In this way, the reliability of the software increases during maintenance.
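The distinction between modification-traversing and modification-revealing tests can be made concrete with a toy sketch. The two functions below stand in for the original and modified program versions; they are invented, not the paper's mutation machinery:

```python
# A test that traverses the modified code is worth keeping for
# regression only if the two versions actually disagree on it
# (i.e., the test is modification-revealing, not merely
# modification-traversing). Both functions are illustrative.

def original(x):
    return x * 2

def modified(x):
    return x * 2 if x < 10 else x * 3   # behavior changed for x >= 10

def modification_revealing(tests):
    """Return the tests on which the two versions produce different outputs."""
    return [t for t in tests if original(t) != modified(t)]

print(modification_revealing([1, 5, 10, 20]))  # [10, 20]
```

All four inputs traverse the modified line, but only the last two reveal the modification; the first two add regression cost without discriminating power.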
Our revolutionary process eliminates the tradeoffs inherent in the long-term support of critical systems: frequent improvements vs. stability, rigorous testing vs. quick delivery and early testing vs. stable testing area. We now routinely place critical modifications in our customers' hands in 24 hours without compromising testing rigor.
Quality Contracting and Proposal Specification and Analysis: Some specific techniques for dealing with the selling and buying ends of technical quality specification, and for making contracts testable.
Specification Analysis: A method for analyzing a specification to identify the real objectives, constraints, designs, and assumptions.
Specification Organization Rules and Procedures: A method for organizing proposal and contract information which helps clarity, responsibility, costing, testing, presentation and other functions.
Specification Language: An innovative, practical language for specification which promotes clarity, configuration management, risk control, costing, evolutionary project planning, and testability.
Specification Quality Control: Inspection; entry/exit numeric quality control; corporate learning mechanisms, so you don't have to repeat mistakes.
(Presentation Abstract To Be Supplied)
"Software engineers" are not, strictly speaking, engineers in the legal sense. However, people whose work is developing computer systems think of themselves as professionals on the engineering model. They have technical degrees, which more and more are being granted by departments of computer science and engineering, departments within a college of engineering. They view their work as requiring practical and theoretical skills in the sciences and mathematics, just as engineers do. And they willingly assume responsibility for the practical utility of their products, as they are used in the real world -- they expect to make those products work. Finally, they resist with some passion attempts to "dumb down" their work, to make it more mechanical, less creative, something a "technician" could do.At the same time, software engineers put up with a great deal that "real" engineers seldom have to face. The irony is that negative aspects of their jobs violate the very principles of professional engineering they would like to assume as their own. It is often said that as a very young profession, software development cannot yet expect to be like the older-established branches of science and engineering. This philosophical consolation may be offered because no one knows what to do about some problem, whose solution must await the fullness of time. But in more and more cases, we all know very well what can and should be done -- it just isn't being done. We accept the status quo with what amounts to a little grumbling, when it would be more appropriate to be outraged.
Here are some "facts of life" for software engineers that make a shambles of any pretense that their work is an engineering profession:
- They are expected to "manage their own work," thus doing their managers' jobs in addition to their own, but without the control that a manager requires to be effective.
- They work with tools and techniques that are far from well-known "best practices," not to mention the accepted "state of the art."
- As a consequence, they have to work too hard, under too much pressure, to "get it to work" despite mismanagement, poor practices, and inadequate tools.
There is a long and honorable tradition of technical workers saving their bosses from the bosses' folly, and an occasional heroic episode in this tradition is exciting and rewarding. But as a way of professional life, panic mode has little to recommend it.
DOWNLOAD A COPY OF THIS PAPER IN POSTSCRIPT
DOWNLOAD A COPY OF THE SLIDES IN POSTSCRIPT
This paper describes use of the Software Sigma Predictive Engine to predict escaping defects and rework costs in delivered software. The engine utilizes defect containment history data and estimated size (delivered source lines of code), estimating defect levels and associated rework costs and providing confidence intervals for these estimates. Current efforts are underway to validate the engine against actual project data. Preliminary rework cost data for defects generated in some stage, but discovered in a later stage, has been collected from 14 software projects. Results suggest that the cost avoidance of software defects during software development can easily run into the millions of dollars. A principal observation is that a mature software process can produce substantial cost avoidance by reduction of defect rework.
Prior to purchasing space, aircraft, or communications systems, the Air Force operationally tests them to ensure they meet the specified needs of their users. The Air Force Operational Test and Evaluation Center (AFOTEC) conducts these operational tests for the Air Force. Since many modern systems rely heavily on software, the Air Force requires software to be mature before beginning these lengthy, expensive tests. Software maturity is a measure of the software's progress toward meeting documented user requirements. The software analysis division at AFOTEC uses software problem, change, and failure tracking data to help demonstrate when software has sufficiently met requirements and fixed identified problems. The concept and evaluation are simple, but rarely considered by developers and acquirers prior to AFOTEC involvement.
One of the enduring problems in testing is knowing when it is reasonable to discontinue testing. Widely used methods often involve some form of coverage measure, such as branch and statement coverage. There are several problems with such methods. Except at the lowest levels of the testing effort, it is often necessary to settle for less than full coverage; it is common to accept 80-85% coverage. However, there is no meaning given to these figures, other than that it is better to achieve the highest level possible, and the empirical observation that it is often very difficult and expensive to construct test sets that achieve higher levels of coverage. The problem is even worse at the level of system testing, where any form of branch or statement coverage is too detailed to be practical, leaving testers with no quantitative measure of test data adequacy.

More sophisticated coverage approaches involve coverage elements that may be syntactically possible but are infeasible, in the sense that no data causes them to be executed. In this case it is necessary to determine which items are infeasible in order to know what level of coverage is logically possible, and this may be very difficult.
A new approach to coverage, statistical test coverage, is described; it attempts to solve these problems. It can be used to give meaning to the concept of partial coverage, to avoid the infeasible-coverage-element problem, and to allow the use of sophisticated coverage measures that would otherwise be difficult to implement.
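One plausible reading of the statistical idea can be sketched as follows: treat each coverage element's execution as an observed rate under sampled inputs, so "partial coverage" acquires a probabilistic meaning and never-hit elements surface as candidates for infeasibility. This sketch is an assumption about the approach, not the paper's method; the instrumented function and branch names are invented:

```python
# Sample random inputs, record which branches execute, and report
# each branch's observed execution rate rather than a bare
# covered/uncovered bit. Program and branch names are illustrative.
import random

def program(x, hits):
    if x % 2 == 0:
        hits["even"] = hits.get("even", 0) + 1
    else:
        hits["odd"] = hits.get("odd", 0) + 1
    if x > 90:
        hits["rare"] = hits.get("rare", 0) + 1   # hit by ~10% of inputs

def branch_rates(trials=1000, seed=1):
    rng = random.Random(seed)
    hits = {}
    for _ in range(trials):
        program(rng.randint(0, 100), hits)
    return {branch: n / trials for branch, n in hits.items()}

rates = branch_rates()
# A branch still unhit after many samples is a candidate infeasible
# (or very hard to reach) coverage element.
print(rates)
```

Under this view, "85% coverage" is replaced by per-element rates with confidence attached to them, which is what gives partial coverage a defensible meaning.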
An object model is a representation of an object-oriented system that abstracts away many implementation issues, and is thus a suitable starting point for an object-oriented design. Object models are essentially an extension of entity-relationship diagrams, originally devised for database design but reworked in the context of objects. The popularity of object modeling is increasing dramatically. Object models form the core of methodologies such as Shlaer-Mellor, OMT, and Booch. Rational's Unified Modeling Language, a candidate standard that combines features from OMT, Booch, and Jacobson, likewise employs entity-relationship-style object models as its central notation. A variety of tools have been developed for object models that support some degree of code generation. They are also capable of rudimentary consistency checks. But there is currently no commercial technology available for detailed analysis of object models prior to coding that would allow efficient and early detection of errors.
The main body of American law governing the quality of commercial computer software is contract law, especially the Uniform Commercial Code (UCC). The UCC is being revised, with minimal input from the software development community, to provide new rules for the development, sales, and support of software. In this tutorial Kaner reviews the current state of software contract law, the UCC proposals, and the UCC maintenance process (which you can be involved in!).
There are no silver bullets when it comes to software development. However there are processes, technologies and management approaches which can lead to dramatic improvements in software quality. Forty of these techniques, applied over a four year period, helped IBM's largest software development site save millions of dollars by reducing field defects by 46%, by reducing service costs by 20%, by increasing customer satisfaction by 14% and by boosting productivity by 56%. This presentation describes each technique briefly and summarizes benefits and implementation pitfalls.
Early prediction of quality would enable developers to make improvements at a cost-effective point in the life cycle. This paper focuses on the number of faults as one aspect of quality. We study a sample of modules representing over 1.2 million lines of code, taken from a large telecommunications system. We measured software design attributes and categories of variables relative to software reuse, and used discriminant analysis to classify fault-prone modules. The case study shows that reuse history can be used to improve quality models, and that this model produced more accurate predictions than other models.
Many recognized software process assessment and improvement methods ultimately converge to continuous improvement frameworks. However, initial improvement steps are often massive and long (12-18 months), while it is necessary to show results as early as possible to capture the attention of the organization and to keep it focused on process improvement. This inconsistency can be removed with incremental process improvement.
High integrity software systems are often used in environments where lack of a response can cause an accident or result in severe financial loss due to unnecessary shutdown. This paper describes "Unravel," a tool that can assist evaluation of high integrity software by using program slices to extract computations for detailed examination. The tool currently evaluates ANSI C and is designed to be extensible to other languages.
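A toy backward slice conveys the underlying idea of extracting "the computation of one variable" for focused review. The statement representation below is invented for illustration, and Unravel itself works on ANSI C source rather than this toy form:

```python
# Toy backward slice over straight-line assignments: walking
# backwards from the target variable, keep each statement that
# defines a variable still needed, and add the variables it uses.

def backward_slice(stmts, target):
    """stmts: list of (defined_var, used_vars); returns indices in the slice."""
    needed = {target}
    slice_ix = []
    for i in range(len(stmts) - 1, -1, -1):
        var, uses = stmts[i]
        if var in needed:
            slice_ix.append(i)
            needed.discard(var)   # this definition satisfies the need
            needed.update(uses)   # ...but its inputs are now needed
    return sorted(slice_ix)

program = [
    ("a", set()),     # 0: a = input()
    ("b", {"a"}),     # 1: b = a + 1
    ("c", set()),     # 2: c = input()
    ("d", {"b"}),     # 3: d = b * 2
]
print(backward_slice(program, "d"))  # [0, 1, 3]
```

Statement 2 is irrelevant to `d` and drops out, which is exactly the reduction that makes detailed examination of a critical computation tractable. A real slicer must also handle branches, loops, and pointers, which is where the engineering lies.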
The derivation of a software inspection process description language which can be used as input to inspection support tools will be described, along with a new inspection support tool which implements the language. The use of the language and tool to implement and run a specific inspection technique will be demonstrated.
Project management must decide when a product is ready to ship, a decision based on a mass of inevitably imperfect information, including an estimate of a product's bugginess. The test manager has two responsibilities: to report correct information, and to report the available information correctly. This paper examines the situation the test manager faces when asked to present the "public face" of the test team, for example in a status meeting.
Can the effort used for test writing be used to write requirements instead? A simple requirements specification system combined with an automated test system holds great promise. This paper examines one such combination.
This paper reports on applied research work carried out with two industry partners. Grey box testing is defined as a mix of black and white box testing: the testers do not fully understand the application or the code (the black box side), but their testing is enhanced by flowgraphs indicating where coverage has occurred or is missing (the white box side).
Software-reliability-engineered testing (SRET) is testing that uses reliability objectives and profiles of system use to speed software development while assuring the necessary reliability. It is unique in providing a way to engineer and manage testing to get the right balance among reliability, timely delivery, and competitive cost for software-based systems. Operational profiles are quantitative descriptions of how software-based systems are expected to be used in the field. This presentation will outline how you apply operational profiles in software-reliability-engineered testing. It will focus on how operational profiles are developed, including identifying initiators of operations, creating operations lists, determining operation occurrence rates, and deriving operation occurrence probabilities. Then it will show how operational profiles are applied in test preparation, especially selection of test cases.
Operational profiles are quantitative descriptions of how software-based systems are expected to be used in the field. This tutorial will show how operational profiles can be used in testing, and will focus on how operational profiles are developed, including identifying initiators of operations, creating operations lists, determining operation occurrence rates, and deriving operation occurrence probabilities. Some applications of operational profiles are presented, along with details on how to use these notions in your own testing project.
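The core arithmetic behind the steps above can be illustrated with a small sketch (the operation names and rates here are hypothetical, invented purely for illustration): occurrence probabilities are derived by normalizing occurrence rates, and test cases are then selected in proportion to the profile.

```python
import random

# Hypothetical operation occurrence rates (operations per hour), as might
# be gathered while developing an operational profile.
occurrence_rates = {
    "process_order": 450.0,
    "query_status": 300.0,
    "cancel_order": 40.0,
    "generate_report": 10.0,
}

# Derive occurrence probabilities by normalizing the rates.
total_rate = sum(occurrence_rates.values())
operational_profile = {op: rate / total_rate
                       for op, rate in occurrence_rates.items()}

def select_test_operations(profile, n, seed=0):
    """Select n operations to test, weighted by the operational profile,
    so that test effort mirrors expected field usage."""
    rng = random.Random(seed)
    ops = list(profile)
    weights = [profile[op] for op in ops]
    return rng.choices(ops, weights=weights, k=n)
```

With these rates, "process_order" receives a probability of 0.5625 and is accordingly exercised most often when test cases are drawn from the profile.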
The 1992 DoD Software Technology Strategy set the objective to reduce software problem rates by a factor of 10 by the year 2000. The National Software Quality Experiment is being conducted to benchmark the current state of software quality, and as a mechanism to obtain core samples of software product quality. A micro-level national database of product quality data is being populated by a continuous stream of samples from industry, government and DoD sources. The centerpiece of the experiment is the Software Inspection Lab where data collection procedures, product checklists, and participant behaviors are packaged and cataloged for operational project use.
Over the maintenance life of software-intensive systems, regression tests can become geriatric and outdated. As enhancements and improvements are made, regression tests must be updated from time to time. One approach to updating a suite of regression tests was to develop usage models of the system based on a wealth of field-collected data reflecting actual customer usage. This data was used to derive a set of user and usage classes, providing a set of operational scenarios. The regression tests were then updated or modified to reflect the actual usage of the system.
This paper will review the planning and managing of test automation in a multi-platform test environment. This will include the development of an automation test methodology, the necessary building blocks for automation, the framework around automation, and some practical considerations for the design and development of a test management system for automation.
The 10X Testing Program offers software professionals a systematic way to improve both the productivity and the quality of testing by tenfold. A number of companies have employed the combination of methods, standards, tools, measurements, and training prescribed by the 10X Testing Program, and now those companies are reporting 10X gains. This tutorial shows how you can: (a) assess your current testing process; (b) design a new 10X testing process; (c) gain support for the new process from managers and engineers alike; (d) implement your new 10X process; and (e) evaluate the impact and benefits of 10X testing.
This talk describes the application of the StP/T and TestWorks product suites to a complete life-cycle sequence: formal specifications, OMT-type design, code realization with RAD methods, metric and static analysis, followed by systematic, automated, coverage-based testing that is repeatable and thorough.
Although it is well understood to be a generally undecidable problem, a number of attempts have been made over the years to develop systems to automatically generate test data. These approaches have ranged from early attempts at symbolic execution to more recent approaches based on, for example, dynamic data flow analysis or constraint satisfaction. Despite their variety (and varying degrees of success), all the systems developed have involved a detailed analysis of the program or system under test and have encountered problems (such as handling of procedure calls, efficiently finding solutions to systems of predicates and dealing with problems of scale) which have hindered their progress from research prototype to commercial tool. The approach described in this paper uses the ideas of Genetic Algorithms (GAs) to automatically develop a set of test data to achieve a level of coverage (branch coverage in this case). Using GAs neatly sidesteps many of the problems encountered by other systems in attempting to automatically generate test data.
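The GA-based approach described above can be sketched in miniature. The toy program, the target branch, the branch-distance fitness, and all GA parameters below are illustrative assumptions, not the authors' actual system: candidate inputs are evolved until one satisfies a hard-to-hit branch predicate, with fitness measured as distance from taking the branch.

```python
import random

def program_under_test(x):
    """Toy program containing a hard-to-reach branch."""
    if x == 4242:            # target branch to cover
        return "target"
    return "other"

def branch_distance(x, target=4242):
    """Fitness: how far input x is from taking the target branch (0 = covered)."""
    return abs(x - target)

def ga_generate_test_data(pop_size=50, generations=200, lo=0, hi=100000, seed=1):
    """Evolve an integer input that covers the target branch."""
    rng = random.Random(seed)
    population = [rng.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=branch_distance)
        if branch_distance(population[0]) == 0:
            break                                 # branch covered
        parents = population[: pop_size // 2]     # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2                  # crossover: averaging
            if rng.random() < 0.3:                # mutation: small perturbation
                child += rng.randint(-100, 100)
            children.append(max(lo, min(hi, child)))
        population = parents + children
    return min(population, key=branch_distance)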
Organizations are spending more time and money on their testing and process efforts. But how do you know whether or not the testing and process improvement efforts are paying off? One way is to define specific metrics to measure the effectiveness of your process, and the efficiency with which the process is carried out.
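Two metrics of the kind the paragraph above alludes to can be sketched as follows (the metric choices and the example figures are illustrative assumptions, not taken from the paper): defect removal efficiency measures process effectiveness, and cost per defect found measures efficiency.

```python
def defect_removal_efficiency(found_in_test, found_in_field):
    """Effectiveness metric: fraction of all known defects caught before release."""
    total = found_in_test + found_in_field
    return found_in_test / total if total else 1.0

def cost_per_defect(testing_cost, found_in_test):
    """Efficiency metric: average cost of finding one defect during testing."""
    return testing_cost / found_in_test if found_in_test else float("inf")

# Hypothetical example: 94 defects found in test, 6 escaped to the field,
# at a total testing cost of $47,000.
dre = defect_removal_efficiency(94, 6)    # 0.94
cpd = cost_per_defect(47000, 94)          # 500.0
```

Tracked release over release, a rising removal efficiency at a stable or falling cost per defect is one concrete way to show that the improvement effort is paying off.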
Many software organizations are adopting some sort of test automation strategy. In our haste to increase the number of tests we run, however, we may easily forget that the quality of the tests is at least as important as their number. Even when we are conscious of this, we struggle to measure our test quality. This technical tutorial will discuss definitions of test quality, as well as test quality measurements. The tutorial will present a survey of test coverage measurements, including both code coverage analysis and requirements coverage analysis. Some supporting tools for software written in the C language will also be discussed briefly.
Adopted by the International Organization for Standardization in 1987, ISO-9000 is being promulgated worldwide as the quality management standard. To date there are over 50,000 registrations in the UK and about the same number in the rest of the world. But has ISO-9000 improved quality? John Seddon will argue that it has not. He will present evidence that shows how registration to ISO-9000 has resulted in suboptimization of performance in every organization he has studied. What accounts for what has happened in the name of quality? Seddon will deal with the favorite argument of the standard's defenders: "It's OK if you do it right!" He will show how a number of influences have worked together to result in what he calls "...a tragedy in the history of quality."
This tutorial will introduce the emerging International Standard on Software Process Assessment (ISO/SPICE). The tutorial will describe the background and objectives of an International Standard on Software Process Assessment. The different elements of the Standard will be discussed. The Reference Model of processes and process attributes will be described; the requirements for conducting a software process assessment will be identified and explained; the exemplar assessment model and guidance for conducting software process assessments will be described to illustrate the rationale behind the Standard. The relationship between the Standard and existing models and Standards such as the Capability Maturity Model, ISO 9001 and ISO 12207 will be explained. Initial results from Phase 2 of the SPICE trials will also be provided. This tutorial is intended for software managers and engineers involved in software process improvement actions in their organization. They may already use the CMM for assessment and improvement or ISO 9001 to develop and manage their Quality Management System. They will be interested in understanding more about the standardization of software process assessment and how their existing activities can be improved.
One of the largest fears companies have in looking at the ISO 9000 family of standards is the cost and impact of implementation. There is a fear that documenting business processes, and keeping that documentation current, will be a large expense. There is a concern that a large "Quality" organization will need to be developed to monitor and track compliance, which would have a major impact on the corporate culture. This presentation will discuss some ways in which workflow can be used to support the introduction and maintenance of ISO 9000.
During the last decade the scientific QA community has successfully worked out a variety of practical methods to assure the quality of both software development processes and the products themselves. Yet some projects are "QA resistant" for a variety of reasons, at least one of which is the influence of an experienced, self-confident project manager, i.e. a real software entrepreneur. This paper suggests an activity model called IQAM (Improve Quality After Milestone) to enhance quality in "heroic" projects.
An object request broker (ORB) is middleware in a multi-tier client/server computing environment. Testing an ORB requires facing the intrinsic multi-platform nature of the system, and the test system must be able to account for many possible variations of platforms. This paper describes a strategy to thoroughly and automatically test an ORB, which requires creation of a special purpose test system.
Test data selection criteria can be defined based on specification-only information (i.e. functional criteria) or on implementation-based information (i.e. structural criteria). This paper concentrates on data flow based structural criteria, whose application in real-world programs is complicated by the occurrence of pointers, arrays, and structured variables. A new approach that promises to overcome some of these difficulties, combining the work of Maldonado with a new conservative data flow model, is presented.
This presentation will give practical insight into how static and dynamic (coverage) analysis, two effective testing techniques that are too often overlooked in software testing practice, were applied in our organization. The presentation will demonstrate the benefits and the quantitative quality results achieved by applying these techniques and tools on real-life projects.
This paper will describe the performance testing process and discuss key issues and considerations during each phase of the process. It will also compare and contrast the general classes of testing tools available to support the process (simulation tools vs. emulation tools, actual clients vs. virtual clients, etc.). The emphasis will be that each type of tool has its strengths and the selection of the "best" performance testing tool depends on your test objectives and your particular testing environment. Actual customer case studies will be used to highlight issues and discuss "lessons learned."
Without sufficient data from many projects in various domains, researchers have difficulty identifying the types of problems for which new development and assurance methods are needed. The results of a Call for White Papers issued by the National Institute of Standards and Technology [NIST] revealed a strong need for an objective organization to address these problems [NIST95]. Consequently, NIST has begun a project called error, fault, and failure data collection and analysis (EFF). The purposes of EFF are to help industry assess software system quality by collecting, analyzing, and providing error, fault, and failure data, and by providing data collection and statistical methods and tools for the analysis of software systems. The EFF will also provide more knowledge about the errors and faults that are neither detected nor prevented by current methods.
The most important analytical method to assure the quality of real-time systems is dynamic testing. Existing software test methods concentrate mostly on testing for functional correctness. They are not specialized in the examination of temporal correctness which is also essential to the correct functioning of real-time systems. A new approach for testing the temporal behavior of such systems is based on genetic algorithms and has been successfully applied in various experiments.
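The idea of using genetic algorithms against temporal behavior can be sketched as follows. Everything here is an illustrative assumption (the toy task, the iteration-count proxy for execution time, the deadline, and the GA parameters are not from the paper): the GA searches for the input that maximizes the task's run length, and if that maximum exceeds the deadline, a temporal error has been revealed.

```python
import random

def task_iterations(x):
    """Toy system task whose amount of work depends on its input.
    Iteration count stands in for execution time; a real harness would
    measure actual execution time on the target hardware."""
    n, v = 0, x
    while v > 1:                       # Collatz-style loop: run length varies with input
        v = v // 2 if v % 2 == 0 else 3 * v + 1
        n += 1
    return n

DEADLINE = 150  # hypothetical iteration budget standing in for a time deadline

def search_worst_case(pop_size=40, generations=100, lo=1, hi=100000, seed=7):
    """GA searching for the input that maximizes the task's run length."""
    rng = random.Random(seed)
    pop = [rng.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=task_iterations, reverse=True)   # longest-running first
        parents = pop[: pop_size // 2]                # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2                      # crossover
            if rng.random() < 0.3:                    # mutation
                child += rng.randint(-500, 500)
            children.append(max(lo, min(hi, child)))
        pop = parents + children
    worst = max(pop, key=task_iterations)
    return worst, task_iterations(worst)
```

Whereas functional testing asks "is the output right?", the fitness function here asks "how long did it take?", which is what lets the search home in on deadline violations.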
This paper, illustrating the best practices that contributed to a quality product delivery, is a case study of the initial release of the HP Fortran 90 compiler (http://www.hp.com/go/hpfortran). It will describe the nature of the product, the quality criteria used, what worked, what did not work, and what we learned from the process. The main factors for success involved: (1) standardized project methodologies; (2) a firm QA plan; (3) incorporating off the shelf components; (4) effective configuration management; (5) solid coding practices; (6) a successful beta program.
The concept of a firewall has proven useful for functionally designed software, especially for integration regression testing in the presence of small changes (see Abdullah [1], Leung [5,6] and White [7]). When small changes occur in a large software system, and after new tests have been designed for those changes, the firewall indicates where regression tests must be applied so that regression errors do not spread throughout the system. In this presentation, the concept of a firewall is applied to object-oriented software with the same objective in mind: given small changes in an object-oriented system, find those areas of the system which must be regression tested before the changes are integrated into the rest of the system. To be specific, C++ will be used to demonstrate the technique, but the method should be applicable to any language implementing object-oriented analysis and design methodology.
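The essence of a firewall computation can be sketched as a reverse-reachability traversal over a class-dependency graph (this sketch, including the invented class names, is an illustration of the general idea, not the method presented in the paper): starting from the changed classes, every class that directly or transitively depends on them falls inside the firewall and must be regression tested.

```python
from collections import defaultdict, deque

# Hypothetical class-dependency edges: (client, supplier) means
# `client` uses or inherits from `supplier`.
dependencies = [
    ("OrderUI", "OrderController"),
    ("OrderController", "Order"),
    ("ReportGen", "Order"),
    ("Order", "Money"),
    ("AuditLog", "Money"),
]

def build_firewall(changed, deps):
    """Return the firewall: the changed classes plus every class that
    directly or transitively depends on a changed class."""
    dependents = defaultdict(set)              # supplier -> set of clients
    for client, supplier in deps:
        dependents[supplier].add(client)
    firewall, queue = set(changed), deque(changed)
    while queue:                               # breadth-first over dependents
        cls = queue.popleft()
        for client in dependents[cls] - firewall:
            firewall.add(client)
            queue.append(client)
    return firewall
```

For example, changing only "Order" pulls "OrderController", "ReportGen" and "OrderUI" into the firewall, while "Money" and "AuditLog" stay outside it and need no retest.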
Companies beginning or struggling with test automation can benefit from recommendations shared by others who have "...been there, done that, got the tee-shirt..." Kerry Zallar shares experiences gained personally and by others who have been successful in automating their testing. Key points include planning test automation throughout the software development life cycle, treating test automation as a software development effort in itself, and focusing on the maintainability of the resulting test suite.
DOWNLOAD A COPY OF THIS PAPER IN PDF
The paper presents the successfully implemented concept of an integrated tool set for the complete management of tests for complex digital circuits, programmable hardware systems, and simulators. The tool set covers the generic specification, automatic generation, execution and verification of test programs and the generation of test reports. It effectively supports quality management according to ISO 9001.
JUMP TO TOP OF PAGE
RETURN TO QW'97 PROGRAM
Testing is the key to solving the "Year 2000 problem" -- ensuring that applications do not fail over the millennium change. This tutorial teaches how to:
- Test an application for Year 2000 compliance
- Find potential failures
- Indicate where corrective changes should be made
- Test corrective changes that have been made
- Certify that an application is correct
- Assess external and package software for Year 2000 compliance
- Acceptance test Year 2000 changes by outsourcing service vendors
The Java language and its associated technology have taken software development by storm. Discussion of Java at conferences and in the press has mostly focused on technical details and novel applications. This panel will look at Java from a different point of view: In what respects does Java obviate bugs? Does Java development present any new sources of trouble or cause for concern? A key question is whether Java is a "paradigm shift" or simply yet another programming language. Java assumes server-centric computing, so it has implications for infrastructure as well as software. If all this is a paradigm shift, can we expect a corresponding improvement in Java software quality? If it is not, does it compare favorably with established technologies?
Panelists will consider:
- How much of Java's promise is due to its fat-server/skinny client approach?
- What quality problems/strategies are the same with Java development?
- What quality problems/strategies are different with Java development?
- To what extent must we follow the Java model to realize its benefits?
- How should we allocate our quality portfolio (e.g., 25% prototyping, 50% unit test, 25% system test)?
- What kind of development environment (e.g., web servers, multiple client OS, multiple browsers) is necessary?
- What kind of apps are good (bad) candidates for Java implementation?
- What can we do to improve testability?
- What are reasonable expectations for quality costs and benefits with Java development?
Panel Members: Bob Binder, RBSC Corporation (Panel Chair)
Jonathan Beskin, RST Corporation
Edward Miller, Software Research, Inc.
Neil Smith, Visual Java Test Manager, Microsoft
Tony Wasserman, Methods & Tools
This panel discusses architectural strategies for designing automated test suites with GUI test tools so that the suites are more robust to UI changes and better documented. This work is the result of a private 15-20 person working group led by panelist Cem Kaner.