San Francisco, California, USA | 29 May - 1 June 2001
BOFS Participation Wanted
QW2001 Birds of a Feather Sessions (BOFSs) are organized by QW2001 Advisory
Board Member Mark Wiley.
If you are a QW2001 speaker or a QW2001 attendee and you are interested in
attending a BOFS on a particular topic, or you would like to chair a BOFS,
please contact Mark at: mwiley2@earthlink.net.
How The QW2001 BOFSs Work
BOFSs are scheduled to occur during the luncheon hour, during breaks, or
after the last conference session.
Locations of each meeting will be announced as soon as all subjects and moderators are identified.
BOFS Topics
Life as a New Test Manager
Moderated by Johanna Rothman.
Congratulations! You've just been promoted to test manager. Now what do you
do?
We all have our own ways of becoming successful test managers. Please attend
this BOF to share what's worked for you or your manager. Some of the topics
we'll discuss: drawing the line between project management and test
management, how to get the rest of the organization to listen to you, how to
know that you're managing the right things.
Possible Areas for Discussion
What do you do, and what does the project manager do?
- What are your responsibilities, and how do they intersect with the
project manager's?
- What other matrix management issues do you have?
Influence and power, and how to use them
- How do you get others to listen to you?
- Do you have to be right all the time?
- Where is the power in your organization?
What are the right things to manage?
- What kinds of people have you hired/inherited? What bearing do their
skills have on your work?
- Do you have to create vision and mission statements?
- What's your style of management?
Creating a Curriculum for Software Testing
Moderated by Cem Kaner.
At Florida Tech, we are building an undergraduate educational program in
software testing. At some point, this will turn into a degree program and
not just a cluster of options within a computer science degree. I want to
discuss with you some ideas of what such a degree program
(CS, specialization in testing) should contain and how we could publicize the
degree program to potentially interested undergraduates.
Managing Requirements in Internet Time
Moderated by Brian Lawrence
A hallmark of Internet development is the perception that our software must
be delivered as rapidly as possible. Does this mean that we should choose
not to do some of the things we know we would do if we had more time, such
as modeling and managing requirements?
In this BOF I will lead a discussion about how we manage our software's
requirements when we feel that we don't have time to do extensive
documentation. What are the "shortcuts" that really work? How can we
maintain reasonable control of our requirements with the optimal investment
of time and effort?
How do we strike a balance between system definition and implementation?
Please bring your questions, problems, and experiences
to this discussion.
Rapid Testing Without Sacrificing Software Quality
Moderated by Eric Patel and Nancy Landau
The past couple of years have seen the introduction of numerous
methodologies and techniques on how to go about testing Web-based
applications. The problem is that nobody usually has time to do all of the
tests that can (or should) be done. So what do you do? How do you strike a
balance between doing thorough testing and delivering "good enough" software
quality? In this BOF we will discuss the tradeoffs and compromises that we
must make if we are to release Web-based software in Internet time.
Extreme Programming (XP) Testing Practices
Moderated by Lisa Crispin
The tester on an XP team faces multiple challenges. Can you really get
the customer to write the acceptance tests? Are you really supposed to
automate ALL the tests, and if so, how can you possibly do it fast
enough to keep up with one- to three-week iterations? Does the tester
also have a quality assurance role? No doubt different XP testers have
taken radically different approaches. Please attend this BOF to share
what's worked for you and your team.
Possible Areas for Discussion:
- Do you pair test? Do you refactor your tests? What XP practices can be
applied to testing?
- How do you or your customer write acceptance tests?
- What testing should you automate? How can you automate fast enough for
XP iterations?
- Do you test before the end of each iteration, or are you testing one
iteration behind?
- What automated test tools have worked for you?
- How do you keep the team informed about testing and test results?
- Do you work as part of the development team? Or are you more of a customer advocate? Or both?
- How do you handle fixing defects?
What are the "Holy Grails of Testing"?
Moderated by Hans Buwalda and Hung Nguyen
In his speech on Wednesday, "The Three 'Holy Grails' of Test Development,"
Hans presents some of his ideas on which factors are essential in test
development. On Friday, Hung will present a case study that uses some of
these ideas and extends them to test automation.
However, the views presented are not the only ones possible, and the
"grails" presented are limited to test development, while other topics, for
example in test management or test automation, might be equally "holy"
(defined as: important and hard to find).
The presentations are meant to stimulate discussion, and in this Birds of a
Feather session we would like to begin that discussion.
Some possible topics are:
- Did you find the points in the keynote relevant as they were
presented?
- What are your answers to the challenges presented in the speech
(has anybody found a grail)?
- Which other points do you think are essential to make testing
successful?
- How would you apply them to the "special" areas of testing, like the
web or technical software?
Handling Sticky Situations in Testing
Moderated By Elisabeth Hendrickson
As testers, quality engineers, and managers, our days are often filled with
difficult interactions and sticky situations. We need to gather
information from folks who don't have the time to talk to us or who seem to
resist telling us what we need to know. We want to convince other groups
to change what they give us, when they involve us, and sometimes even how
they do their work. We often find ourselves arguing over which bugs to
fix and which features to keep or cut. We are often blamed for bad releases
and for not shipping on time. Sometimes we feel like we're helping;
sometimes we feel like we can't win.
It helps to talk about difficult interactions and sticky situations, to
hear how others have handled similar problems, to share our
experiences. What do you do when you're being blamed for things over which
you have no control? How do you encourage others over whom you have no
direct authority to change the way they do business?
Bring your stories, the good and the bad, and come share your experiences.
Performance, Load and Stress Testing
Moderated By Ross Collard
The intention of this session is to examine the major issues of performance
testing, such as:
- Determining the right mix of demands to place on the system during
performance evaluation.
- Determining what and how to measure.
- Establishing the test environment, including scalability from the test
lab to the live operational environment.
- Evaluation and selection of load testing tools.
- Interpretation of performance measurements and test results.
- The relationship between performance testing and system tuning.
Plan to share your experiences, or just listen if this area is new to you.
Streamlining the Software CMM for Today's Projects and Organizations
Moderated By Bill Deibler
The SEI Software CMM is a comprehensive model that can serve as a basis
for assessing and improving the effectiveness of software development
organizations. The CMM was derived from the requirements of government
purchasing agencies overseeing large, complex, third-party development
projects. Because of their large project focus, the practices described
in the CMM can appear to small, internal, or commercial software
development organizations to be inapplicable or burdensome and
bureaucratic.
Version 1.1 of the CMM is published in two technical reports containing a
total of nearly 600 pages. The size of the CMM makes it difficult to
uncover the interrelationships among the elements that are essential to
tailoring the model to a small software development environment. It also
makes the model intimidating.
The BOF's discussion will seek to answer questions such as:
- How does one implement a realistic and useful strategy for deploying
software development practices in today's commercial organizations?
- Is it possible to simplify the CMM to support appropriate, effective,
flexible software development processes for any size organization?
- How does one resolve apparent discrepancies between the guidance in the
CMM and the needs of small, commercial and internal software development
projects and organizations?
- How does one identify and prioritize elements of advanced levels that
should be considered by every organization?