Exploratory Software Testing Tips, Tricks, Tours, and Techniques to Guide Test Design

  • ISBN13:
  • ISBN10:
  • Edition: 1st
  • Format: Paperback
  • Copyright: 2009-08-25
  • Publisher: Addison-Wesley Professional




How to Find and Fix the Killer Software Bugs that Evade Conventional Testing

In Exploratory Software Testing, renowned software testing expert James Whittaker reveals the real causes of today's most serious, well-hidden software bugs--and introduces powerful new "exploratory" techniques for finding and correcting them.

Drawing on nearly two decades of experience working at the cutting edge of testing with Google, Microsoft, and other top software organizations, Whittaker introduces innovative new processes for manual testing that are repeatable, prescriptive, teachable, and extremely effective. Whittaker defines both in-the-small techniques for individual testers and in-the-large techniques to supercharge test teams. He also introduces a hybrid strategy for injecting exploratory concepts into traditional scripted testing. You'll learn when to use each, and how to use them all successfully.

Concise, entertaining, and actionable, this book introduces robust techniques that have been used extensively by real testers on shipping software, illuminating their actual experiences with these techniques and the results they've achieved. Writing for testers, QA specialists, developers, program managers, and architects alike, Whittaker answers crucial questions such as:

  • Why do some bugs remain invisible to automated testing--and how can I uncover them?
  • What techniques will help me consistently discover and eliminate "show stopper" bugs?
  • How do I make manual testing more effective--and less boring and unpleasant?
  • What's the most effective high-level test strategy for each project?
  • Which inputs should I test when I can't test them all?
  • Which test cases will provide the best feature coverage?
  • How can I get better results by combining exploratory testing with traditional script- or scenario-based testing?
  • How do I reflect feedback from the development process, such as code changes?

Author Biography

James Whittaker has spent his career in software testing and has left his mark on many aspects of the discipline. He was a pioneer in the field of model-based testing, where his Ph.D. dissertation from the University of Tennessee stands as a standard reference on the subject. His work in fault injection produced the highly acclaimed runtime fault injection tool Holodeck, and he was an early thought leader in security and penetration testing. He is also well regarded as a teacher and presenter, and has won numerous best paper and best presentation awards at international conferences. While a professor at Florida Tech, his teaching of software testing attracted dozens of sponsors from both industry and world governments, and his students were highly sought after for their depth of technical knowledge in testing.


Dr. Whittaker is the author of How to Break Software and its series follow-ups How to Break Software Security (with Hugh Thompson) and How to Break Web Software (with Mike Andrews). After ten years as a professor, he joined Microsoft in 2006 and left in 2009 to join Google as the Director of Test Engineering for the Kirkland and Seattle offices. He lives in Woodinville, Washington, and is working toward a day when software just works.


Table of Contents

Foreword by Alan Page     xv

Preface     xvii


Chapter 1    The Case for Software Quality     1

The Magic of Software     1

The Failure of Software     4

Conclusion     9

Exercises     9


Chapter 2    The Case for Manual Testing     11

The Origin of Software Bugs     11

Preventing and Detecting Bugs     12

Manual Testing     14

Conclusion     19

Exercises     20


Chapter 3    Exploratory Testing in the Small     21

So You Want to Test Software?     21

Testing Is About Varying Things     23

User Input     23

    What You Need to Know About User Input     24

    How to Test User Input     25

State     32

    What You Need to Know About Software State     32

    How to Test Software State     33

Code Paths     35

User Data     36

Environment     36

Conclusion     37

Exercises     38


Chapter 4    Exploratory Testing in the Large     39

Exploring Software     39

The Tourist Metaphor     41

“Touring” Tests     43

    Tours of the Business District     45

    Tours Through the Historical District     51

    Tours Through the Entertainment District     52

    Tours Through the Tourist District     55

    Tours Through the Hotel District     58

    Tours Through the Seedy District     60

Putting the Tours to Use     62

Conclusion     63

Exercises     64


Chapter 5    Hybrid Exploratory Testing Techniques     65

Scenarios and Exploration     65

Applying Scenario-Based Exploratory Testing     67

Introducing Variation Through Scenario Operators     68

    Inserting Steps     68

    Removing Steps     69

    Replacing Steps     70

    Repeating Steps     70

    Data Substitution     70

    Environment Substitution     71

Introducing Variation Through Tours     72

    The Money Tour     73

    The Landmark Tour     73

    The Intellectual Tour     73

    The Back Alley Tour     73

    The Obsessive-Compulsive Tour     73

    The All-Nighter Tour     74

    The Saboteur     74

    The Collector’s Tour     74

    The Supermodel Tour     74

    The Supporting Actor Tour     74

    The Rained-Out Tour     75

    The Tour-Crasher Tour     75

Conclusion     75

Exercises     76


Chapter 6    Exploratory Testing in Practice     77

The Touring Test     77

Touring the Dynamics AX Client     78

    Useful Tours for Exploration     79

    The Collector’s Tour and Bugs as Souvenirs     81

    Tour Tips     84

Using Tours to Find Bugs     86

    Testing a Test Case Management Solution     86

    The Rained-Out Tour     87

    The Saboteur     88

    The FedEx Tour     89

    The TOGOF Tour     90

The Practice of Tours in Windows Mobile Devices     90

    My Approach/Philosophy to Testing     91

    Interesting Bugs Found Using Tours     92

    Example of the Saboteur     94

    Example of the Supermodel Tour     94

The Practice of Tours in Windows Media Player     97

    Windows Media Player     97

    The Garbage Collector’s Tour     97

    The Supermodel Tour     100

    The Intellectual Tour     100

    The Intellectual Tour: Boundary Subtour     102

    The Parking Lot Tour and the Practice of Tours in Visual Studio Team System Test Edition     103

Tours in Sprints     103

Parking Lot Tour     105

Test Planning and Managing with Tours     106

Defining the Landscape     106

Planning with Tours     107

Letting the Tours Run     109

Analysis of Tour Results     109

Making the Call: Milestone/Release     110

    In Practice     110

Conclusion     111

Exercises     111


Chapter 7    Touring and Testing’s Primary Pain Points     113

The Five Pain Points of Software Testing     113

Aimlessness     114

    Define What Needs to Be Tested     115

    Determine When to Test     115

    Determine How to Test     116

Repetitiveness     116

    Know What Testing Has Already Occurred     117

    Understand When to Inject Variation     117

Transiency     118

Monotony     119

Memorylessness     120

Conclusion     121

Exercises     122


Chapter 8    The Future of Software Testing     123

Welcome to the Future     123

The Heads-Up Display for Testers     124

“Testipedia”     126

    Test Case Reuse     127

    Test Atoms and Test Molecules     128

Virtualization of Test Assets     129

Visualization     129

Testing in the Future     132

Post-Release Testing     134

Conclusion     134

Exercises     135


Appendix A    Building a Successful Career in Testing     137

How Did You Get into Testing?     137

Back to the Future     138

The Ascent     139

The Summit     140

The Descent     142


Appendix B    A Selection of JW’s Professorial “Blog”     143

Teach Me Something     143

Software’s Ten Commandments     143

    1. Thou Shalt Pummel Thine App with Multitudes of Input     145

    2. Thou Shalt Covet Thy Neighbor’s Apps     145

    3. Thou Shalt Seek Thee Out the Wise Oracle     146

    4. Thou Shalt Not Worship Irreproducible Failures     146

    5. Thou Shalt Honor Thy Model and Automation     146

    6. Thou Shalt Hold Thy Developers' Sins Against Them     147

    7. Thou Shalt Revel in App Murder (Celebrate the BSOD)     147

    8. Thou Shalt Keep Holy the Sabbath (Release)     148

    9. Thou Shalt Covet Thy Developer’s Source Code     148

Testing Error Code     149

Will the Real Professional Testers Please Step Forward     151

    The Common Denominators I Found Are (In No Particular Order)     152

    My Advice Can Be Summarized as Follows     153

Strike Three, Time for a New Batter     154

    Formal Methods     154

    Tools     155

    Process Improvement     156

    The Fourth Proposal     156

Software Testing as an Art, a Craft and a Discipline     157

Restoring Respect to the Software Industry     160

    The Well-Intentioned but Off-Target Past     160

    Moving On to Better Ideas     161

    A Process for Analyzing Security Holes and Quality Problems     161


Appendix C    An Annotated Transcript of JW’s Microsoft Blog     165

Into the Blogosphere     165

July 2008     166

    Before We Begin     166

    PEST (Pub Exploration and Software Testing)     167

    Measuring Testers     168

    Prevention Versus Cure (Part 1)     169

    Users and Johns     170

    Ode to the Manual Tester     171

    Prevention Versus Cure (Part 2)     173

    Hail Europe!     174

    The Poetry of Testing     175

    Prevention Versus Cure (Part 3)     176

    Back to Testing     177

August 2008     178

    Prevention Versus Cure (Part 4)     179

    If Microsoft Is So Good at Testing, Why Does Your Software Still Suck?     180

    Prevention Versus Cure (Part 5)     183

    Freestyle Exploratory Testing     183

    Scenario-Based Exploratory Testing     183

    Strategy-Based Exploratory Testing     184

    Feedback-Based Exploratory Testing     184

    The Future of Testing (Part 1)     184

    The Future of Testing (Part 2)     186

September 2008     188

    On Certification     188

    The Future of Testing (Part 3)     189

    The Future of Testing (Part 4)     191

    The Future of Testing (Part 5)      192

October 2008     193

    The Future of Testing (Part 6)     194

    The Future of Testing (Part 7)     195

    The Future of Testing (Part 8)     196

    Speaking of Google     198

    Manual Versus Automated Testing Again     198

November 2008     199

    Software Tester Wanted     200

    Keeping Testers in Test     200

December 2008     201

    Google Versus Microsoft and the Dev:Test Ratio Debate     201

January 2009     202

    The Zune Issue     203

    Exploratory Testing Explained     204

    Test Case Reuse     205

    More About Test Case Reuse     206

    I’m Back     207

    Of Moles and Tainted Peanuts     208


Index     211



Preface

"Customers buy features and tolerate bugs." --Scott Wadsworth

Anyone who has ever used a computer understands that software fails. From the very first program to the most recent modern application, software has never been perfect. Nor is it ever likely to be. Not only is software development insanely complex and the humans who perform it characteristically error prone, but the constant flux in hardware, operating systems, runtime environments, drivers, platforms, databases, and so forth converges to make the task of software development one of humankind's most amazing accomplishments.

But amazing isn't enough. As Chapter 1, "The Case for Software Quality," points out, the world needs it to be high quality too. Clearly, quality is not an exclusive concern of software testers. Software needs to be built the right way, with reliability, security, performance, and so forth part of the design of the system rather than a late-cycle afterthought. However, testers are on the front lines when it comes to understanding the nature of software bugs. There is little hope of a broad-based solution to software quality without testers being at the forefront of the insights, techniques, and mitigations that will make such a possibility into a reality.

There are many ways to talk about software quality and many interested audiences. This book is written for software testers and is about a specific class of bugs that I believe are more important than any other: bugs that evade all means of detection and end up in a released product.

Any company that produces software ships bugs. Why did those bugs get written? Why weren't they found in code reviews, unit testing, static analysis, or other developer-oriented activity? Why didn't the test automation find them? What was it about those bugs that allowed them to avoid manual testing? What is the best way to find bugs that ship?

It is this last question that this book addresses.
In Chapter 2, "The Case for Manual Testing," I make the point that because users find these bugs while using the software, testing must also use the software to find them. For automation, unit testing, and so forth, these bugs are simply inaccessible. Automate all you want; these bugs will defy you and resurface to plague your users.

The problem is that much of the modern practice of manual testing is aimless, ad hoc, and repetitive. Downright boring, some might add. This book seeks to add guidance, technique, and organization to the process of manual testing.

In Chapter 3, "Exploratory Testing in the Small," guidance is given to testers for the small, tactical decisions they must make with nearly every test case. They must decide which input values to apply to a specific input field or which data to provide in a file that an application consumes. Many such small decisions must be made while testing, and without guidance such decisions often go unanalyzed and are suboptimal. Is the integer 4 better than the integer 400 when you have to enter a number into a text box? Do I apply a string of length 32 or 256? There are indeed reasons to select one over the other, depending on the context of the software that will process that input. Given that testers make hundreds of such small decisions every day, good guidance is crucial. In Ch
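The input-selection reasoning above (why a string of length 32 versus 256 might matter) is the kind of decision boundary-value thinking guides. The sketch below is illustrative only and not from the book; the validator function and its 255-character limit are hypothetical assumptions:

```python
# Hypothetical input field with a 255-character limit; boundary-value
# testing picks inputs at and around that limit rather than at random.
def accepts_username(name: str) -> bool:
    """Toy validator: rejects empty names and names over 255 characters."""
    return 0 < len(name) <= 255

# An exploratory tester favors values near the boundaries, where
# off-by-one bugs hide: empty, one char, exactly at the limit, one past it.
boundary_inputs = ["", "a", "a" * 255, "a" * 256]
results = [accepts_username(s) for s in boundary_inputs]
print(results)  # [False, True, True, False]
```

The point is not the toy validator but the selection strategy: of the effectively infinite strings a tester could type, the handful straddling a limit are disproportionately likely to expose a bug.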
