sss ssss      rrrrrrrrrrr
                      ssss    ss       rrrr   rrrr
                     sssss     s       rrrr    rrrr
                     ssssss            rrrr    rrrr
                      ssssssss         rrrr   rrrr
                          ssssss       rrrrrrrrr
                    s      ssssss      rrrr  rrrr
                    ss      sssss      rrrr   rrrr
                    sss    sssss       rrrr    rrrr
                    s  sssssss        rrrrr     rrrrr

         +===================================================+
         +======= Testing Techniques Newsletter (TTN) =======+
         +=======           ON-LINE EDITION           =======+
         +=======            December 1997            =======+
         +===================================================+

TESTING TECHNIQUES NEWSLETTER (TTN), Online Edition, is E-mailed monthly
to support the Software Research, Inc. (SR)/TestWorks user community and
to provide information of general use to the worldwide software
quality community.

Permission to copy and/or re-distribute is granted, and secondary
circulation is encouraged by recipients of TTN-Online provided that the
entire document/file is kept intact and this complete copyright notice
appears with it in all copies.  (c) Copyright 1997 by Software Research,
Inc.

========================================================================

INSIDE THIS ISSUE:

   o  Holiday Greetings from SR, SR/Institute

   o  Quality Week 1998, San Francisco, California, May 1998: Call for
      Participation Reminder

   o  Pacific Northwest Software Quality Conference, A Conference Report
      by Larry Bernstein

   o  WebSite Content Quality Issues: A Survey for TTN-Online Readers

   o  Call for Nominations, Stevens Award

   o  Property Based Testing: A New Approach to Testing for Assurance,
      by George Fink and Matt Bishop (Part 2 of 2)

   o  Compendium of Software Engineering Tools: Software Methods and
      Tools (SMT) WebSite

   o  European Software Measurement Conference - FESMA'98, May 1998,
      Antwerp, Belgium

   o  IEEE-SA Software Engineering Standard

   o  2nd EuroMicro and Software Maintenance & Reengineering Conference
      (CSM98), Florence, March 1998

   o  Mailing List Policy Statement

   o  TTN Submittal Policy

   o  TTN SUBSCRIPTION INFORMATION

========================================================================

                           HOLIDAY GREETINGS!

We would like to send a very special message to everyone in the
SR/TestWorks/Quality Week community, including testers and quality users
at our ~1000 installed sites, representing upwards of 20,000 TestWorks
users, some 5000 subscribers to TTN-Online, and the 1100 or so people
who attended one of our two Quality Week events in '97:

                          We hope you have a
                    Happy and Joyous Holiday Season
                            and we wish you
                         All the Best for 1998!

========================================================================

                 REMINDER: QW'98 CALL FOR PARTICIPATION
            San Francisco, California, USA:  26-29 May 1998

You can see the QW'98 call for participation at the conference website:
http://www.soft.com/QualWeek/QW98.

You can also download a PostScript or PDF version of the document from
the site.

Paper or presentation proposals are due 16 January 1998.

========================================================================

                 A Conference Report by Larry Bernstein
                (President, National Software Council):
             Pacific Northwest Software Quality Conference

I recently attended the Pacific Northwest Software Quality Conference in
Portland, Oregon, on October 28 and 29, 1997.  It was quite successful.
It led to better understanding of the state-of-the-art and the state-
of-the-practice of software processes.  Almost 500 people attended with
many sessions overflowing.  The exhibits were limited and could have
been better.  The publishers' tables attracted many visitors.

Wolfgang B. Strigel of the SOFTWARE PRODUCTIVITY CENTRE, INC.
(www.spc.ca) reported on a survey of 34 companies representing 2086
software professionals.  The top issue facing these companies is their
ability to recruit good people.  There are too many jobs chasing too few
people.  Surveyed companies felt that they were in control of their
technology.  As expected, they faced short schedules, expanding
requirements and growth challenges.  He pointed out that by the year
2000, 600,000 open software jobs in the US would not be filled by
graduates of US universities.  The study showed that while project
management skills still need improvement, product management skills are
missing in most companies.  Wolfgang called for a Masters program in
Software Business Administration.

Teri Rout echoed the need to improve software-engineering education and
reported on the success of an undergraduate software-engineering program
in Australia.

The state of Oregon is funding a $2.25 million effort to establish a
Master's program in software engineering in the greater Portland
metropolitan area.  Prof. Dick Hamlet is part of the committee setting
up this program.

Robert Glass reviewed how software processes were actually being used.
He heralded the benefits of code reviews and was disappointed in the
results obtained with formal methods.  When challenged he saw the
pitfalls of formal code inspections as defined by Fagan but stuck to his
view of the merits of reviews.  He expanded his comments to include
requirements and design reviews and welcomed the addition of
architecture reviews.  Reviews are a key best current software
engineering practice.  Mr. Glass implored software professionals to
gather data to support the claims of any recommended processes. The
efforts of NIST to collect such data through its Standard Reference
Material for Software Error, Fault, Failure Data Collection & Analysis
Repository were welcomed.  For more information contact Dolores Wallace
at dwallace@nist.gov.

Capers Jones and Barry Boehm were liberally quoted throughout the
conference.  Successful projects were seen as those that:

a. estimate costs using automated tools
b. use automatic software project management tools
c. have EFFECTIVE quality control
d. use fewer, better people
e. continually improve their processes

George Yamamura of Boeing gave a world class talk describing how
Boeing's Space and Defense System's Division achieved a level 5 CMM.  He
reported a rate of 5.3 defects introduced for every 100 lines changed
and showed how using this data Boeing reduced the number of defects in
their final product by catching errors earlier in their development
process.  He found defects highly correlated with personnel practices.
Groups with 10 or more tasks and people with 3 or more independent
activities tended to introduce more defects into the final product than
those more focused on the job at hand.  He pointed out that large
changes were more error prone than small ones, with changes of 100 words
of memory or larger being considered large ones.  The most startling
finding was the 0.918 correlation between defects and personnel
turnover rates.

When Boeing moved from CMM Level 3 to CMM Level 5, they obtained:
a. A 2.4x  productivity increase
b. 83% fewer defects
c. High customer satisfaction, measured in higher profits through a
   formal award-fee program
d. Reduced development costs, and
e. Improved employee morale, so good that he observed an 8% employee
   return rate.

Comments can be sent to Larry Bernstein by Email at
"lbernstein@worldnet.att.net".  Read about the National Software Council
on http://www.CNSoftware.org.

========================================================================

                WEBSITE CONTENT QUALITY ISSUES: A SURVEY

Most TTN-Online readers use the WWW in some capacity -- keeping up on
the news, searching for information, downloading software, etc.  This
survey addresses the question: "What About Quality Issues in the WWW?"

The main concern of the questions here is the quality of the content
of the websites you visit -- not the relative quality of the browser
you're using.  (Relative quality of various browsers is the
subject of a later survey, planned for early 1998.)  As websites grow
and grow, and have increasing technical content (i.e. complex and
dynamic HTML pages, Java applets, complex CGI scripts in a variety
of languages), they become more and more difficult to manage. "Failures"
(of any kind) become more and more difficult to detect (to debug), and
more costly to the website owner.

We invite your answers to the survey questions below.  Send responses by
Email to qw@soft.com.  We'll summarize all of the responses we receive
in TTN-online in an upcoming issue.

Here are the questions:

 o Have you experienced website content failures that have affected
   you?  (For example, broken links, garbled pages, grossly incorrect
   or offensive information, etc.)

 o Have you heard about content-related failures at websites?  What
   were the consequences of these failures?  Who or what was injured
   and what were the consequences?

 o What do you think is the hardest problem you would face if you
   were a website manager?

 o Relative to Quality, what do you think is the weakest part of a
   website's content?

 o What do you think makes a "good" website?  What makes a "bad" one?
   How would you tell the difference?

 o Can you think of a software quality problem in websites that
   everyone in the community ought to be concerned about?

 o Do you have another concern about website quality?  Explain.

Remember, send your responses directly to qw@soft.com.  Look for the
survey summary in a future issue of TTN-Online.

========================================================================

                        CALL FOR NOMINATIONS

                +-----------------------------------+
                |          STEVENS LECTURE          |
                |  ON SOFTWARE DEVELOPMENT METHODS  |
                +-----------------------------------+

                        WAYNE P. STEVENS AWARD

Nominations are sought for the 1998 and 1999 presentations of the
Stevens Award and Lecture.  The purpose of the Stevens Lecture is to
advance the state of software development methods and enhance their
continuing evolution.

The award recipient is recognized for outstanding contributions to the
literature or practice of methods for software development.  The lecture
presentation focuses on advancing or analyzing the state of software
methods and their direction for the future.

Past recipients are:   1995   Tony Wasserman     (USA)
                       1996   David Harel        (Israel)
                       1997   Michael Jackson    (United Kingdom)

This award lecture is named in memory of Wayne Stevens (1944-1993),
a highly-respected consultant, author, pioneer, and advocate of the
practical application of software methods and tools.  His 1974 IBM
Systems Journal article "Structured Design" was the first published on
the topic and has been widely reprinted.  He was the author of the
books Software Design: Concepts and Methods (Prentice-Hall Intl, 1991)
and Using Structured Design (Wiley, 1981). His last article "Data Flow
Analysis and Design" appears in the Encyclopedia of Software Engineering
(Wiley, 1994).  Stevens was the chief architect of IBM's application
development methodology.

The Stevens Lecture is presented by IWCASE, the international
association which sponsors the Software Technology and Engineering
Practice (STEP) conference.

The 1998 lecture will be presented at the
6th Reengineering Forum conference, 9-11 March 1998, in Florence Italy
(see http://www.reengineer.org/ref98/).

The 1999 lecture will be presented at the STEP conference,
date to be announced.

Nominations for the Stevens Award may be submitted by letter, fax, or
electronic mail and must include a description of the contribution of
the nominee (up to 3 pages), citations of key contributions to the
literature on software methods, and contact information for both the
nominee and the nominator.  Anyone may submit a nomination.

Send nominations by 5 January 1998 to:

Stevens Lecture Committee               fax  +1-781-272-8464
IWCASE                                  email  iwcase@iwcase.org
P.O. Box 400
Burlington, MA  01803  USA


Questions and requests for information should be directed to the
chair of the Stevens Award Committee:

Elliot Chikofsky,  DMR Consulting Group
phone +1-781-272-0049;  email e.chikofsky@computer.org

========================================================================

                 PROPERTY-BASED TESTING: A New Approach
                        to Testing for Assurance
                             (Part 2 of 2)


                              George Fink
                            Sun Microsystems
                       (george.fink@eng.sun.com)

                                  and

                              Matt Bishop
                    University of California, Davis
                        (bishop@cs.ucdavis.edu)

   NOTE: This paper appeared in ACM SIGSOFT's Software Engineering
   Notes, Volume 22, No. 4, July 1997 and is reprinted here with
   permission of the ACM.

ABSTRACT:  The goal of software testing analysis is to validate that an
implementation satisfies its specifications.  Many errors in software
are caused by generalizable flaws in the source code.  Property-based
testing assures that a given program is free of specified generic
flaws.  Property-based testing uses property specifications and a
data-flow analysis of the program to guide evaluation of test
executions for correctness and completeness.

(Part 1 of this article appeared in the November 1997 issue of TTN-
Online.)

                    Selecting/Identifying a Property

The first step in property-based testing is to choose a property or
properties from a library of generic properties, and to write any
program-specific properties to test.  Property specifications are
written in TASPEC.  In the case of ftpd, a generic property is used.

A portion of the property library is a set of properties which describe
a security model. One high-level property specification requires that
authentication occur before any permissions are granted:

        authenticated(uid) before permissions_granted(uid).

The library also contains low-level definitions of the predicates
"authenticated" and "permissions_granted", shown in Figure 3. In TASPEC,
actions within curly braces occur when the condition (either a program
location or a logical predicate about the specification state) before
the curly braces occurs. For example, the setuid(uid) location, when
executed, causes the "permissions_granted" predicate to be true in the
specification state.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

   location func setuid(uid) result 1 {
       assert permissions_granted(uid);}

   location func crypt(password, salt) result encryptpwd {
       assert password_entered(encryptpwd);}

   location func getpwnam(name) result pwent {
       assert user_password(name,
           pwent->pw_passwd, pwent->pw_uid);}

   location func strcmp(s1, s2) result 0 {
       assert equal(s1, s2);}

   password_entered(pwd1) ^
       user_password(name, pwd2, uid) ^
       equal(pwd1, pwd2) {assert authenticated(uid);}

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

          Figure 3: Property specification for authentication.
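The rules in Figure 3 can be read as a small runtime monitor: primitive
predicates are asserted as the program executes, "authenticated" is
derived from their conjunction, and the ordering property is checked
over the recorded trace.  The following Python sketch only illustrates
that idea; the event encoding and function names are invented here, and
this is not the TASPEC evaluation engine:

```python
def check_trace(events):
    """events: ordered (predicate, args) tuples recorded at run time.
    Returns uids granted permissions without prior authentication."""
    entered = set()        # password_entered: encrypted passwords typed
    known = {}             # user_password: encrypted password -> uid
    authenticated = set()
    violations = []
    for pred, args in events:
        if pred == "password_entered":
            entered.add(args[0])
        elif pred == "user_password":
            _name, pwd, uid = args
            known[pwd] = uid
        elif pred == "permissions_granted":
            if args[0] not in authenticated:
                violations.append(args[0])
        # conjunction rule: password_entered ^ user_password ^ equal
        # => authenticated  (string equality stands in for strcmp)
        for pwd in entered:
            if pwd in known:
                authenticated.add(known[pwd])
    return violations

# A login where the typed password matches the stored one, followed by
# a permissions grant for that uid, satisfies the property:
good = [("user_password", ("alice", "X", 100)),
        ("password_entered", ("X",)),
        ("permissions_granted", (100,))]
print(check_trace(good))                         # -> []
print(check_trace([("permissions_granted", (200,))]))   # -> [200]
```

A trace that grants permissions to a uid that was never authenticated
is reported as a violation, which is exactly the error condition the
correctness evaluation step described later looks for.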

The authentication property can be selected by hand. Optionally, an
automatic tool could compare location specifiers (code templates) in the
property specifications with the source code of ftpd to evaluate the
relevance of properties in the library. The definition of
"permissions_granted" involves the setuid system call.  ("Setuid" is
used here as an amalgam of the many different permissions-setting
system calls; seteuid is actually used by ftpd.)  The property, then,
forms a pre-condition for the setuid system call. Since ftpd contains
setuid, the authentication property can be automatically chosen as an
important property for which to test.

                      Static Analysis and Slicing

The Tester's Assistant statically analyzes the source code for ftpd.
Ftpd contains about 3000 lines of C code, 1700 lines of which are
machine-generated by lex and yacc.  The static analysis produces a
data-flow graph for ftpd. The ftpd data-flow graph has 6148 nodes and
31912 edges. The data-flow graph is used in other steps of the process:
program instrumentation, coverage evaluation, additional test case
generation, and correctness evaluation.

Next, slices of ftpd are derived from the data-flow graph. First the
slicer generates a slice of ftpd with respect to the selected
authentication property. The human tester inspects the slice manually,
but even in the sliced code (represented in Figure 2) the flaw is subtle
enough that it goes unnoticed. At this point the human tester can
request additional slices based upon any other criteria that can aid in
the tester's understanding of ftpd.
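As an illustration of what the slicer computes, a backward slice can be
sketched as reverse reachability over a dependence graph.  Both the
code and the dependence edges below are invented for the abstracted
ftpd fragment quoted later in this article; this is a sketch, not the
Tester's Assistant slicer:

```python
def backward_slice(deps, criterion):
    """Statements the criterion transitively depends on, itself included.
    deps maps a statement number to the statements it depends on."""
    seen, work = set(), [criterion]
    while work:
        s = work.pop()
        if s not in seen:
            seen.add(s)
            work.extend(deps.get(s, ()))
    return seen

# Hand-made dependences: setuid at (10) is guarded by (9), which reads
# logged_in set at (1) and (7); (7) is controlled by (6), which reads
# name (4) and pass (5).
deps = {10: {9}, 9: {1, 7}, 7: {6}, 6: {4, 5}}
print(sorted(backward_slice(deps, 10)))   # -> [1, 4, 5, 6, 7, 9, 10]
```

Every statement that can influence the setuid call survives into the
slice, which is why inspecting the slice alone is, in principle, enough
to find authentication flaws.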

                        Program Instrumentation

The Tester's Assistant produces an alternate version of ftpd to execute
during testing. The alternate version has the same functionality as
ftpd, but has additional data-gathering modules, so that coverage and
correctness can be evaluated from test results. Every section of source
code corresponding to a location specifier in the property has code
added to record if and when the section of code is executed. The added
code is used later in correctness evaluation. The assignments in the
source code that are significant for coverage evaluation are also tagged
to record when the assignments are executed. The Tester's Assistant
instruments only the slice relative to the selected authentication
property. The instrumented source is then compiled, at which point ftpd
is ready to be executed.

                        Initial Test Executions

The instrumented ftpd is executed several times with various test data.
There are three ways to generate test data for ftpd: First, use any
available test data that was used in initial testing and debugging.
Second, have the analyst generate simple test data from a description of
ftpd's functionality. Finally, if there are any specifications of ftpd,
the specifications can be used to generate test data. Generating test
data from specifications is not specifically part of property-based
testing, but other testing methodologies contain the necessary
algorithms [CRS96, DF93].

The first method is simplest, because no extra work is required and the
test suite is likely to be fairly complete. However, if these test cases
aren't available, the analyst creates some test cases by reading the
ftpd manual page.  Figure 4 shows some sample test cases.


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

                   Test Case 1

   user <name>
   pass <password>
   retr filename

                   Test Case 2

   user <name>
   pass <password>
   retr filename (no access permissions)

                   Test Case 3

   user <name>
   pass <password>
   cwd directory
   retr filename1 filename2

                    Test Case 4

   user <name>
   pass <password>
   list

   "User" enters a name, "pass" enters a password, "retr"
   retrieves a file, "cwd" changes directory, and "list"
   lists a directory.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

              Figure 4: Four initial test cases for ftpd.

The test executions are then evaluated for coverage and correctness.
None of the four executions result in a violation of the authentication
property. However, coverage evaluation reveals that ftpd has not been
completely tested, so more test cases must be found and executed.

                          Coverage Evaluation

As ftpd executes each test case, the coverage instrumentation writes a
file recording the execution history of the
slice. The execution history indicates which path in ftpd was executed.
The initial test executions yield several execution histories. The
execution histories are compared with the coverage metric. Property
based testing uses iterative contexts. Each context is an ordered
sequence of assignments, which defines a sub-path of the program. For a
history to match a context, the assignments must be executed in order
with no intervening and interfering assignments.  The contexts are
generated using static analysis and the data-flow graph.

For the (abstracted) fragment of ftpd source:

   (1) logged_in = 0;
   (2) while(1)
   (3)  switch(cmd) {
   (4)   user: name = read();
   (5)      pass = read();
   (6)      if(match(name,pass))
   (7)       logged_in = 1;
   (8)      break;
   (9)   get:  if (logged_in)
   (10)       setuid(name);
   (11)  }

the contexts required include

        {{4,5,6,10},{4,5,6,4,10},{4,10,4,5,6}}.

The execution histories are compared with the set of contexts to see
which histories match which contexts. The unmatched contexts are
coverage gaps.

The execution histories from the four initial test cases are

        {{4,10,4,5,6},{4,5,6,10},{4,5,6,10},{4,5,6}}.

The second and third execution histories are identical because their
behavior relative to the property specification is identical. The
context {4,5,6,4,10} is a coverage gap in the initial test data, and
corresponds to the flaw in ftpd.
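Under the simplifying assumption that a history matches a context when
the context occurs in it as an ordered subsequence (the interference
condition described above is ignored here), the gap computation can be
sketched in Python:

```python
def matches(context, history):
    """True if context occurs in history as an ordered subsequence."""
    it = iter(history)
    return all(stmt in it for stmt in context)   # 'in' consumes it

def coverage_gaps(contexts, histories):
    """Contexts not matched by any execution history."""
    return [c for c in contexts
            if not any(matches(c, h) for h in histories)]

# Contexts and histories from the ftpd discussion above:
contexts  = [(4, 5, 6, 10), (4, 5, 6, 4, 10), (4, 10, 4, 5, 6)]
histories = [(4, 10, 4, 5, 6), (4, 5, 6, 10), (4, 5, 6, 10), (4, 5, 6)]
print(coverage_gaps(contexts, histories))   # -> [(4, 5, 6, 4, 10)]
```

Exactly the context {4,5,6,4,10} comes back uncovered, which is the
coverage gap that corresponds to the flaw in ftpd.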

                         Additional Test Cases

In order to complete the coverage metric, additional executions of ftpd
are necessary, with different test data that addresses the coverage
gaps. This paper does not present a method to produce this additional
test data automatically, and the problem is not trivial.

A human tester produces additional test data by examining the contexts
not covered and the code corresponding to the contexts. For the contexts
and code in ftpd, there is a close correspondence between input
statements and statement numbers in the uncovered context (Statements 4
and 5). The uncovered context {4,5,6,4,10} is executed by the
following test script:

   user <name 1>
   pass <password 1>
   user <name 2>
   pass <bad password>
   retr filename

Correctness evaluation of this execution detects that the flaw exists in
ftpd.
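Why this script exposes the flaw can be seen by simulating the
abstracted login loop quoted earlier: a failed second login never
resets logged_in, so the retrieval runs under the second,
unauthenticated name.  A hypothetical Python simulation (user names and
passwords invented here):

```python
def ftpd(commands, passwords):
    """commands: ("user", name), ("pass", pwd), or ("retr",) tuples.
    Returns the name setuid would be called with, or None."""
    logged_in = False
    name = None
    for cmd in commands:
        if cmd[0] == "user":
            name = cmd[1]
        elif cmd[0] == "pass":
            if passwords.get(name) == cmd[1]:
                logged_in = True       # the flaw: never reset on failure
        elif cmd[0] == "retr" and logged_in:
            return name                # setuid(name) in the real ftpd
    return None

accounts = {"alice": "secret"}
script = [("user", "alice"), ("pass", "secret"),
          ("user", "bob"),   ("pass", "wrong"),
          ("retr",)]
print(ftpd(script, accounts))   # -> bob, who never authenticated
```

A single correct login retrieves the file as that user, and a single
failed login retrieves nothing; only the doubled login reaches the
setuid call under an unauthenticated name.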

Future versions of the Tester's Assistant may be able to automate some
of the steps in generating test data for gaps in coverage using
techniques based upon symbolic execution [DO91].

                         Correctness Evaluation

During each test execution, a file records the activated TASPEC
primitives. The TASPEC evaluation engine processes this data and
compares it with the property specification. If the data violates the
property specification, then the human tester is informed that the test
caused an error condition.

During processing of the correctness records for the additional test
case given above, the correctness monitor registers that there is a
correct authentication of user 1.  No authentication of user 2 is
registered, because the password match fails. When the file retrieve
action occurs, the "permissions_granted" property is registered.
However, the retrieve occurs with the permissions of user 2, for whom
there is no authentication. Therefore, the additional test case causes
an error condition, so ftpd fails the property based testing with
respect to the authentication property.

                   Applications to Computer Security

Assuring that computer programs and systems are secure is an important
and difficult problem. Security flaws are still being discovered in
computer programs that have been in use for many years. Many of the
flaws are caused by the same basic recurring faults [Spa92]. For
example, the Internet worm [Spa89] exploited errors in Unix network
programs.  Examination of the flaws which caused the errors revealed
them to be of an elementary nature.

It is time for a concerted effort to try to prevent such flaws from
occurring. Therefore, an appropriate initial application of property
based testing and the Tester's Assistant is Unix security, specifically
for network programs. Security is a good application of property based
testing because the parts of programs that relate to security are small,
and generic security properties can be precisely expressed program-
independently with TASPEC.

                            Security Issues

Networked systems cause special security problems because any
communication or authentication between networked systems must be
performed entirely through an exchange of information. The exchange of
information is limited by the network structure as well; many networks
in use today are asynchronous, and make no strict delivery guarantees
for information packets. Problems with asynchrony are complicated by
different implementations for the same service protocol, which may have
different performance. Therefore, network services must be flexible in
their implementation of communication and authentication services. This
flexibility can sometimes be exploited and become a source of security
problems, adding to security problems arising from bad design or
implementation.

Network services with Unix involve the client/server model. The server
runs on a host machine, and regulates access to information on the host
by communication with client processes on other machines on the network.
The server can do its task in one of two ways: by forking off a server-
end client process to handle commands, or by doing all the work
internally. In either case, the server will be interacting with the host
system in a number of ways -- reading/writing files, etc.

Most network servers are privileged programs; they are run with root
privileges on the host machine. Unix has a coarse-grained tri-level file
protection scheme. If the access level for a process cannot fit into
this scheme, the process must be given root level permissions, which
override the scheme. Network services typically do not fit into the
tri-level scheme and are given root permissions, even though root
permissions are used in only one particular function of the program.
Therefore, the server is given excess privileges, which become fertile
ground for exploitable vulnerabilities.

               Using Property Based Testing for Security

Formal testing with property based testing can validate security
properties of software and thus produce secure systems. Security-related
code is often only a small part of a program's functionality. Property
based testing focuses on code relevant to security functionality in
great detail, and so efficiently validates the security-related part of
the program without testing the whole program.

Property based testing provides a methodology for testing narrow
properties of source code. It produces a specific and absolute metric
for successful testing with respect to those properties. A successful
test validates that properties are not violated; if these properties
form the security policy for the system, then the system is secure.

Property-based testing uses a security model of the system, as well as a
library of generic flaws (such as [LBMC93, Spa92]) specified in TASPEC,
to produce a test process, whereby the target program can be certified
to be free of certain types of flaws.

                   Concluding Remarks and Future Work

Property based testing defines a formalized framework for testing. With
property based testing tools, a tester can produce a validation that a
program satisfies given properties.  Other aspects of the process have
not yet been well defined.  How are properties selected? How is it
determined that the properties represent a complete model of a program's
possible failures?

Ongoing research into property based testing at University of
California-Davis includes:

o Tool development: automating more property based testing techniques
  and incorporating them into the Tester's Assistant, and distributing
  the tools to gain a wider evaluation base.
o Property specification: specifying generic flaws and features of
  protocol implementations of TCP and NFS, for example.
o Evaluation of iterative contexts: performing empirical comparisons
  between iterative contexts and other similar metrics such as
  L-contexts [Las90].
o Case studies: gaining more experience using the methodology of
  property based testing and understanding how it can be applied to
  different problems.

                            Acknowledgements

Part of this work has been supported by DARPA, under contract
USNN00014-94-1-0065.

                               REFERENCES

[AI91]  Derek Andrews and Darrel Ince.  Practical Formal Methods with
        VDM.  McGraw-Hill, 1991.

[CER]   CERT advisory CA-88:01.ftpd.hole.

[CPRZ89]Lori A. Clarke, Andy Podgurski, Debra J. Richardson, and Steven
        J. Zeil.  A formal evaluation of data flow path selection
        criteria.  IEEE Transactions on Software Engineering,
        15(11):1318-1331, November 1989.

[CRS96] Juei Chang, Debra J. Richardson, and Sriram Sankar.  Structural
        specification-based testing with ADL.  Submitted to ISSTA 1996
        as a Regular Paper, 1996.

[DF93]  Jeremy Dick and Alain Faivre.  Automating the Generation and
        Sequencing of Test Cases from Model-Based Specifications,
        chapter 4, pages 268-284.  First International Symposium of
        Formal Methods Europe Proceedings.  Springer-Verlag, 1993.

[Dil90] Antoni Diller.  Z: An Introduction to Formal Methods.  John
        Wiley & Sons, 1990.

[DO91]  Richard A. DeMillo and A. Jefferson Offutt.  Constraint-based
        automatic test data generation.  IEEE Transactions on Software
        Engineering, 17(9):  900-910, September 1991.

[FHBL95] George Fink, Michael Helmke, Matt Bishop, and Karl Levitt.  An
        interface language between specifications and testing.
        Technical Report CSE-95-15, University of California, Davis.

[Fin95] George Fink.  Discovering security and safety flaws using
        property-based testing.  PhD thesis, UC Davis, 1995.

[Fin96] George Fink.  Iterative contexts, a complete and practical
        data-flow coverage metric.  In preparation, 1996.

[FKAL94] George Fink, Calvin Ko, Myla Archer, and Karl Levitt.  Towards
        a property- based testing environment with applications to
        security-critical software.  In Proceedings of the 4th Irvine
        Software Symposium, April 1994.

[FL94]  George Fink and Karl Levitt.  Property-based testing of
        privileged programs.  In Tenth Annual Computer Security
        Applications Conference, pages 154-163.  IEEE Computer Society
        Press, December 1994.

[GH93]  John V. Guttag and James J. Horning.  Larch: Languages and
        Tools for Formal Specification.  Texts and Monographs in
        Computer Science.  Springer-Verlag, 1993.

[Hel95] Michael Helmke.  A semi-formal approach to the validation of
        requirements traceability from Z to C.  Master's thesis, UC
        Davis, September 1995.

[Las90] Janusz Laski.  Data flow testing in STAD.  Journal of Systems
        Software, 12:3-14, 1990.

[LBMC93] Carl E. Landwehr, Alan R. Bull, John P. McDermott, and William
        S. Choi.  A taxonomy of computer program security flaws, with
        examples.  Technical Report NRL/FR/5542-93-9591, Naval Research
        Laboratory, November 1993.

[Nta84] Simeon C. Ntafos.  On required element testing.  IEEE
        Transactions on Software Engineering, SE-10(6):795-803, November
        1984.

[Ric94] Debra Richardson.  TAOS: Testing with analysis and oracle
        support.  In Proceedings of the 1994 International Symposium on
        Software Testing and Analysis, August 1994.

[RW85]  Sandra Rapps and Elaine J. Weyuker.  Selecting software test
        data using data flow information.  IEEE Transactions on Software
        Engineering, 11(4):367-375, April 1985.

[Spa89] Eugene H. Spafford.  The internet worm: Crisis and aftermath.
        Communications of the ACM, pages 678-687, June 1989.

[Spa92] Eugene H. Spafford.  Common system vulnerabilities.  Workshop on
        Future Directions in Intrusion and Misuse Detection, 1992.

[Wei84] Mark Weiser.  Program slicing.  IEEE Transactions on Software
        Engineering, SE-10(4):352-375, July 1984.

========================================================================

               Compendium of Software Engineering Tools:
                Software Methods and Tools (SMT) WebSite

TTN Readers are invited to visit the new Software Methods and Tools
(SMT) WebSite:  http://www.methods-tools.com where several hundred
products from over 40 companies are described.  SMT President Dr. Tony
Wasserman has drawn on his extensive experience in software development
and software engineering to select and analyze an effective subset of
the thousands of software products available.

========================================================================

          European Software Measurement Conference - FESMA'98
                     6-8 May 1998, Antwerp, Belgium

The FESMA'98 conference will focus on the application of software
metrics in project control, estimating, and risk management,
emphasizing their added value to business processes and their support
for the software quality improvement process.

Additional information can be found at http://www.kviv.be/ti/fesma.htm

Contact: Bruno Peeters, FESMA - Federation of European Software Metrics
Associations

Phone: +[32] (2) 222-6449

email: bruno.peeters@gemeentekrediet.be


========================================================================

                 IEEE-SA Software Engineering Standard

To All Interested Parties:  Progress at last.  The very first
strawperson draft of an IEEE-SA standard for quality grades for software
component source code packages is now available at http://www.izdsw.org.

This draft uses a triple to give a quality grade along three dimensions:
(1) construction; (2) assembly; and (3) V&V.  The base level grade is
(1,1,1).  In this initial draft the highest grade is (9,4,8).
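The triple-grade scheme can be pictured as a small data structure.  The
sketch below is illustrative only: the range bounds are inferred from
the base (1,1,1) and highest (9,4,8) grades mentioned above, and the
validation logic is this sketch's assumption, not part of the draft
standard itself.

```python
from typing import NamedTuple

class QualityGrade(NamedTuple):
    """A (construction, assembly, V&V) quality-grade triple."""
    construction: int  # 1 (base) .. 9 (highest in this draft)
    assembly: int      # 1 (base) .. 4 (highest in this draft)
    vandv: int         # 1 (base) .. 8 (highest in this draft)

    def is_valid(self) -> bool:
        # Check each dimension against the bounds implied by the
        # base grade (1,1,1) and the highest grade (9,4,8).
        return (1 <= self.construction <= 9
                and 1 <= self.assembly <= 4
                and 1 <= self.vandv <= 8)

BASE = QualityGrade(1, 1, 1)
HIGHEST = QualityGrade(9, 4, 8)
```

A grade outside these bounds, such as (10, 1, 1), would be rejected by
`is_valid()` under this reading of the draft.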

Please pass this information around the net.

Frank Ackerman
Email: A.F.Ackerman@ieee.org
Chair, IEEE-SA

========================================================================

                          Preliminary Program

                  2nd EUROMICRO CONFERENCE on SOFTWARE
                     MAINTENANCE AND REENGINEERING
                                  and
                        6th REENGINEERING FORUM

                   March 8-11, 1998, Florence, Italy

                Sponsored by: Euromicro, REF, IEEE (TSE)
                      In cooperation with: CESVIT
       Under the patronage of: Univ.Firenze, AIIA, DSI, AICA, TABOO,...

                      Please visit our Home Page:
               http://www.dsi.unifi.it/~nesi/csmr98.html

               Email Contact: csmr98@aguirre.ing.unifi.it

P r e l i m i n a r y   P r o g r a m

Keynote Speakers:
  * Continuous Engineering for Industrial Scale Software Systems.
    Prof. Weber

Session: Tool Architecture
  * Architecture and Functions of a Commercial Software Reengineering
    Workbench (H.M. Sneed - D)
  * Control Flow Normalization for COBOL/CICS Legacy System (Chris
    Verhoef, Alex Sellink, Mark van den Brand - University of
    Amsterdam - NL)

Session: Data Reengineering
  * A Generic Approach for Data Reverse Engineering Taking into
    Account Application Domain Knowledge (Sonia Ayachi Ghannouchi -
    ENSI (National School for Computer Studies) - Tunisia)
  * A strategy for reducing the effort for database schema
    maintenance (Donatella Castelli - Istituto di Elaborazione
    dell'Informazione - I)

Session: Business Information Technology
  * An Organizational Framework for Mass-Customized Business
    Application (Petra Ludwig, Thomas Kaufmann, Harald Liessmann -
    Bavarian Research Center for Knowledge-Based Systems - D)

Session: Year 2000 Problem
  * Variable Classification Technique for Software Maintenance and
    Application to The Year 2000 Problem (Keiko Kawabe, Akihiko
    Matsuo, Sanya Uehara - Fujitsu Laboratories Ltd. - JP - Akira
    Ogawa - Fujitsu Ltd. - JP)

Session: Program Understanding
  * On Constructing a Tool to Verify Programs for Processors Built
    in Machines (Tomoya Ohta, Norihiro Matsumura, Yukihiro Itoh -
    Shizuoka University - JP)

Session: Reuse and Object Oriented Techniques
  * A Dependence-Based Representation for Concurrent Object-Oriented
    Software Maintenance (Jianjun Zhao - Fukuoka Institute of
    Technology - Jingde Cheng, Kazuo Ushijima - JP)
  * OOA Metrics for the Unified Modeling Language (Michele Marchesi
    - Universita' di Cagliari - I)
  * Protection Reconfiguration for Reusable Software (Christian
    Damsgaard Jensen - Universite Joseph Fourier - F)

Session: System Assessment
  * Towards Mature Measurement Programs (Frank Niessink, Hans van
    Vliet - Universiteit Amsterdam - NL)
  * A Tool for Process and Product Assessment of C++ Applications
    (F.Fioravanti, P.Nesi, S. Perlini - Univ Florence - I)
  * Software Testability Measurement derived from Data Flow Analysis
    (Pu-Lin Yeh, Jin-Cherng Lin - Tatung Institute of Technology -
    Taiwan)

Session: Software Architecture
  * Assessing Architectural Complexity (Rick Kazman - Carnegie Mellon
    University - USA - Marcus Burth - University of Mannheim - D)
  * Architecture recovery for Software Evolution (Juan C. Duenas,
    William Lopes, Juan A. de la Puente - Universidad Politecnica
    de Madrid - E)

Session: Requirements and Specification Evolution
  * Requirements Evolution in the Midst of Environmental Change: A
    Managed Approach (Wing Lam - University of Hertfordshire - UK)
  * A Method for Assessing Legacy Systems for Evolution (Jane Ransom,
    Ian Sommerville, Ian Warren - Lancaster University - UK)
  * System Specification Reengineering Using the SpecView Tool
    (Tereza G. Kirner, Rogeria C. Gratao - Federal University of
    Sao Carlos - BR)
  * A Tool Supporting the Re-Design of Legacy Applications (Katja
    Cremer - RWTH Aachen - D)

Session: Maintenance Effort
  * Modeling Maintenance Effort by means of Dynamic Systems
    (F.Calzolari, G. Antoniol, P. Tonella - IRST - I)
  * Improving Defect Removal Effectiveness for Software Development
    (Hareton K. N. Leung - The Hong Kong Polytechnic University)
  * The Extract-Transform-Rewrite Cycle: A Step Towards metaCARE
    (Bernt Kullbach - University of Koblenz - D)

Session: Logic Programming, Telecommunication
  * A Metric Suite for Concurrent Logic Programs (Jianjun Zhao -
    Fukuoka Institute of Technology - Jingde Cheng, Kazuo Ushijima -
    Kyushu University - JP)
  * Identifying Fault-Prone Modules: An Empirical Study in a
    Telecommunication System (Sung-Back Hong, Kapsu Kim - ISDN
    Call Processing Section, ETRI - KR)

Papers in OPEN FORUM Sections
  * DBFW: A Simple DataBase Framework for the Evaluation and
    Maintenance of Automated Theorem Prover Data (Peter Jakobi,
    Andreas Wolf - Technische Universitat Munchen - D)
  * Reengineering of Distributed Systems Using Formal Methods
    (Stephan Kleuker - University of Oldenburg - D)
  * Metrics-based Evaluation of Object-Oriented Software
    Development Methods (Reiner R. Dumke, Erik Foltin -
    University of Magdeburg - D)
  * Supporting Software Evolution using Zones (Cathy Waite, Ray
    Welland, Malcolm Atkinson - University of Glasgow - UK)
  * RENAISSANCE: A Method To Migrate From Legacy To Immortal
    (Intecs Sistemi S.p.A. - I)
  * Visualization of Differences between Versions of
    Object-Oriented Software (Jochen Seemann, Jurgen Wolff von
    Gudenberg - Wurzburg University - D)
  * Amber Metrics for the Testing & Maintenance of Object-Oriented
    Designs (Jill Doake, Ishbel Duncan - University - UK)
  * Tailoring the Process Model for Maintenance and Reengineering
    (Sara Stoecklin, Deidre Williams - Florida Agricultural &
    Mechanical University - Peter Stoecklin - PCSA Inc. - USA)
  * Toward a systematic object-oriented transformation of a Merise
    analysis (Isabelle Borne, Annya Romanczuk, Frederique Stefani -
    Ecole des Mines de Nantes - F)
  * Object Evolution through Model Evolution (Roland T.Mittermeier,
    Helfried Pirker, Dominik Rauner-Reithmayer - Universitat
    Klagenfurt - A)
  * A Sound and Practical Approach to the Re-Engineering of Time
    Critical Systems (H. Zedan - H. Yang - De Montfort University - UK)
  * Reengineering a Computerized Numerical Control Towards
    Object-Oriented (F. Butera, B. Fontanella, P. Nesi, M. Perfetti -
    ELEXA S.r.l. - I)
  * Software Artifacts Reuse and Maintenance: An Organizational
    Framework (Claudine Toffolon, Salem Dakhli - Paris-Dauphine
    University - F)


========================================================================

              TTN-Online -- Mailing List Policy Statement

Some subscribers have asked us to prepare a short statement outlining
our policy on use of E-mail addresses of TTN-Online subscribers.  This
issue, and several other related issues about TTN-Online, are available
in our "Mailing List Policy" statement.  For a copy, send E-mail to
ttn@soft.com and include the word "policy" in the body of the E-mail.

========================================================================
------------>>>          TTN SUBMITTAL POLICY            <<<------------
========================================================================

The TTN Online Edition is E-mailed around the 15th of each month to
subscribers worldwide.  To have your event listed in an upcoming issue
E-mail a complete description and full details of your Call for Papers
or Call for Participation to "ttn@soft.com".

TTN On-Line's submittal policy is as follows:

o Submission deadlines indicated in "Calls for Papers" should provide at
  least a 1-month lead time from the TTN On-Line issue date.  For
  example, submission deadlines for "Calls for Papers" in the January
  issue of TTN On-Line would be for February and beyond.
o Length of submitted non-calendar items should not exceed 350 lines
  (about four pages).  Longer articles are OK and may be serialized.
o Length of submitted calendar items should not exceed 60 lines (one
  page).
o Publication of submitted items is determined by Software Research,
  Inc. and may be edited for style and content as necessary.

DISCLAIMER:  Articles and items are the opinions of their authors or
submitters; TTN-Online disclaims any responsibility for their content.

TRADEMARKS:  STW, TestWorks, CAPBAK, SMARTS, EXDIFF, Xdemo, Xvirtual,
Xflight, STW/Regression, STW/Coverage, STW/Advisor, TCAT, TCAT-PATH, T-
SCOPE and the SR logo are trademarks or registered trademarks of
Software Research, Inc. All other systems are either trademarks or
registered trademarks of their respective companies.

========================================================================
----------------->>>  TTN SUBSCRIPTION INFORMATION  <<<-----------------
========================================================================

To SUBSCRIBE to TTN-Online, to CANCEL a current subscription, to CHANGE
an address (a CANCEL and a SUBSCRIBE combined) or to submit or propose
an article, send E-mail to "ttn@soft.com".

TO SUBSCRIBE: Include in the body of your letter the phrase "subscribe
 ".

TO UNSUBSCRIBE: Include in the body of your letter the phrase
"unsubscribe  ".

                     TESTING TECHNIQUES NEWSLETTER
                        Software Research, Inc.
                            901 Minnesota Street
                   San Francisco, CA  94107 USA

              Phone:          +1 (415) 550-3020
              Toll Free:      +1 (800) 942-SOFT (USA Only)
              FAX:            +1 (415) 550-3030
              E-mail:         ttn@soft.com
              WWW URL:        http://www.soft.com

                               ## End ##