
xUnit Test Patterns: Refactoring Test Code

by Gerard Meszaros
  • ISBN13: 9780131495050
  • ISBN10: 0131495054
  • Format: Hardcover
  • Copyright: 2007-05-21
  • Publisher: Addison-Wesley Professional
List Price: $79.99
Digital: $71.99 (save $8.00)


Summary

Improves software return on investment by teaching readers how to refactor test code and how to reduce or prevent crippling test-maintenance costs.

Author Biography

Gerard Meszaros is Chief Scientist and Senior Consultant at ClearStream Consulting.

Table of Contents

Visual Summary of the Pattern Language p. xvii
Foreword p. xix
Preface p. xxi
Acknowledgments p. xxvii
Introduction p. xxix
Refactoring a Test p. xlv
The Narratives p. 1
A Brief Tour p. 3
About This Chapter p. 3
The Simplest Test Automation Strategy That Could Possibly Work p. 3
Development Process p. 4
Customer Tests p. 5
Unit Tests p. 6
Design for Testability p. 7
Test Organization p. 7
What's Next? p. 8
Test Smells p. 9
About This Chapter p. 9
An Introduction to Test Smells p. 9
What's a Test Smell? p. 10
Kinds of Test Smells p. 10
What to Do about Smells? p. 11
A Catalog of Smells p. 12
The Project Smells p. 12
The Behavior Smells p. 13
The Code Smells p. 16
What's Next? p. 17
Goals of Test Automation p. 19
About This Chapter p. 19
Why Test? p. 19
Economics of Test Automation p. 20
Goals of Test Automation p. 21
Tests Should Help Us Improve Quality p. 22
Tests Should Help Us Understand the SUT p. 23
Tests Should Reduce (and Not Introduce) Risk p. 23
Tests Should Be Easy to Run p. 25
Tests Should Be Easy to Write and Maintain p. 27
Tests Should Require Minimal Maintenance as the System Evolves Around Them p. 29
What's Next? p. 29
Philosophy of Test Automation p. 31
About This Chapter p. 31
Why Is Philosophy Important? p. 31
Some Philosophical Differences p. 32
Test First or Last? p. 32
Tests or Examples? p. 33
Test-by-Test or Test All-at-Once? p. 33
Outside-In or Inside-Out? p. 34
State or Behavior Verification? p. 36
Fixture Design Upfront or Test-by-Test? p. 36
When Philosophies Differ p. 37
My Philosophy p. 37
What's Next? p. 37
Principles of Test Automation p. 39
About This Chapter p. 39
The Principles p. 39
What's Next? p. 48
Test Automation Strategy p. 49
About This Chapter p. 49
What's Strategic? p. 49
Which Kinds of Tests Should We Automate? p. 50
Per-Functionality Tests p. 50
Cross-Functional Tests p. 52
Which Tools Do We Use to Automate Which Tests? p. 53
Test Automation Ways and Means p. 54
Introducing xUnit p. 56
The xUnit Sweet Spot p. 58
Which Test Fixture Strategy Do We Use? p. 58
What Is a Fixture? p. 59
Major Fixture Strategies p. 60
Transient Fresh Fixtures p. 61
Persistent Fresh Fixtures p. 62
Shared Fixture Strategies p. 63
How Do We Ensure Testability? p. 65
Test Last - at Your Peril p. 65
Design for Testability - Upfront p. 65
Test-Driven Testability p. 66
Control Points and Observation Points p. 66
Interaction Styles and Testability Patterns p. 67
Divide and Test p. 71
What's Next? p. 73
xUnit Basics p. 75
About This Chapter p. 75
An Introduction to xUnit p. 75
Common Features p. 76
The Bare Minimum p. 76
Defining Tests p. 76
What's a Fixture? p. 78
Defining Suites of Tests p. 78
Running Tests p. 79
Test Results p. 79
Under the xUnit Covers p. 81
Test Commands p. 82
Test Suite Objects p. 82
xUnit in the Procedural World p. 82
What's Next? p. 83
Transient Fixture Management p. 85
About This Chapter p. 85
Test Fixture Terminology p. 86
What Is a Fixture? p. 86
What Is a Fresh Fixture? p. 87
What Is a Transient Fresh Fixture? p. 87
Building Fresh Fixtures p. 88
In-line Fixture Setup p. 88
Delegated Fixture Setup p. 89
Implicit Fixture Setup p. 91
Hybrid Fixture Setup p. 93
Tearing Down Transient Fresh Fixtures p. 93
What's Next? p. 94
Persistent Fixture Management p. 95
About This Chapter p. 95
Managing Persistent Fresh Fixtures p. 95
What Makes Fixtures Persistent? p. 95
Issues Caused by Persistent Fresh Fixtures p. 96
Tearing Down Persistent Fresh Fixtures p. 97
Avoiding the Need for Teardown p. 100
Dealing with Slow Tests p. 102
Managing Shared Fixtures p. 103
Accessing Shared Fixtures p. 103
Triggering Shared Fixture Construction p. 104
What's Next? p. 106
Result Verification p. 107
About This Chapter p. 107
Making Tests Self-Checking p. 107
Verify State or Behavior? p. 108
State Verification p. 109
Using Built-in Assertions p. 110
Delta Assertions p. 111
External Result Verification p. 111
Verifying Behavior p. 112
Procedural Behavior Verification p. 113
Expected Behavior Specification p. 113
Reducing Test Code Duplication p. 114
Expected Objects p. 115
Custom Assertions p. 116
Outcome-Describing Verification Method p. 117
Parameterized and Data-Driven Tests p. 118
Avoiding Conditional Test Logic p. 119
Eliminating "if" Statements p. 120
Eliminating Loops p. 121
Other Techniques p. 121
Working Backward, Outside-In p. 121
Using Test-Driven Development to Write Test Utility Methods p. 122
Where to Put Reusable Verification Logic? p. 122
What's Next? p. 123
Using Test Doubles p. 125
About This Chapter p. 125
What Are Indirect Inputs and Outputs? p. 125
Why Do We Care about Indirect Inputs? p. 126
Why Do We Care about Indirect Outputs? p. 126
How Do We Control Indirect Inputs? p. 128
How Do We Verify Indirect Outputs? p. 130
Testing with Doubles p. 133
Types of Test Doubles p. 133
Providing the Test Double p. 140
Configuring the Test Double p. 141
Installing the Test Double p. 143
Other Uses of Test Doubles p. 148
Endoscopic Testing p. 149
Need-Driven Development p. 149
Speeding Up Fixture Setup p. 149
Speeding Up Test Execution p. 150
Other Considerations p. 150
What's Next? p. 151
Organizing Our Tests p. 153
About This Chapter p. 153
Basic xUnit Mechanisms p. 153
Right-Sizing Test Methods p. 154
Test Methods and Testcase Classes p. 155
Testcase Class per Class p. 155
Testcase Class per Feature p. 156
Testcase Class per Fixture p. 156
Choosing a Test Method Organization Strategy p. 158
Test Naming Conventions p. 158
Organizing Test Suites p. 160
Running Groups of Tests p. 160
Running a Single Test p. 161
Test Code Reuse p. 162
Test Utility Method Locations p. 163
TestCase Inheritance and Reuse p. 163
Test File Organization p. 164
Built-in Self-Test p. 164
Test Packages p. 164
Test Dependencies p. 165
What's Next? p. 165
Testing with Databases p. 167
About This Chapter p. 167
Testing with Databases p. 167
Why Test with Databases? p. 168
Issues with Databases p. 168
Testing without Databases p. 169
Testing the Database p. 171
Testing Stored Procedures p. 172
Testing the Data Access Layer p. 172
Ensuring Developer Independence p. 173
Testing with Databases (Again!) p. 173
What's Next? p. 174
A Roadmap to Effective Test Automation p. 175
About This Chapter p. 175
Test Automation Difficulty p. 175
Roadmap to Highly Maintainable Automated Tests p. 176
Exercise the Happy Path Code p. 177
Verify Direct Outputs of the Happy Path p. 178
Verify Alternative Paths p. 178
Verify Indirect Output Behavior p. 179
Optimize Test Execution and Maintenance p. 180
What's Next? p. 181
The Test Smells p. 183
Code Smells p. 185
Obscure Test p. 186
Conditional Test Logic p. 200
Hard-to-Test Code p. 209
Test Code Duplication p. 213
Test Logic in Production p. 217
Behavior Smells p. 223
Assertion Roulette p. 224
Erratic Test p. 228
Fragile Test p. 239
Frequent Debugging p. 248
Manual Intervention p. 250
Slow Tests p. 253
Project Smells p. 259
Buggy Tests p. 260
Developers Not Writing Tests p. 263
High Test Maintenance Cost p. 265
Production Bugs p. 268
The Patterns p. 275
Test Strategy Patterns p. 277
Recorded Test p. 278
Scripted Test p. 285
Data-Driven Test p. 288
Test Automation Framework p. 298
Minimal Fixture p. 302
Standard Fixture p. 305
Fresh Fixture p. 311
Shared Fixture p. 317
Back Door Manipulation p. 327
Layer Test p. 337
xUnit Basics Patterns p. 347
Test Method p. 348
Four-Phase Test p. 358
Assertion Method p. 362
Assertion Message p. 370
Testcase Class p. 373
Test Runner p. 377
Testcase Object p. 382
Test Suite Object p. 387
Test Discovery p. 393
Test Enumeration p. 399
Test Selection p. 403
Fixture Setup Patterns p. 407
In-line Setup p. 408
Delegated Setup p. 411
Creation Method p. 415
Implicit Setup p. 424
Prebuilt Fixture p. 429
Suite Fixture Setup p. 441
Setup Decorator p. 447
Chained Tests p. 454
Result Verification Patterns p. 461
State Verification p. 462
Behavior Verification p. 468
Custom Assertion p. 474
Delta Assertion p. 485
Guard Assertion p. 490
Unfinished Test Assertion p. 494
Fixture Teardown Patterns p. 499
Garbage-Collected Teardown p. 500
Automated Teardown p. 503
In-line Teardown p. 509
Implicit Teardown p. 516
Test Double Patterns p. 521
Test Double p. 522
Test Stub p. 529
Test Spy p. 538
Mock Object p. 544
Fake Object p. 551
Configurable Test Double p. 558
Hard-Coded Test Double p. 568
Test-Specific Subclass p. 579
Test Organization Patterns p. 591
Named Test Suite p. 592
Test Utility Method p. 599
Parameterized Test p. 607
Testcase Class per Class p. 617
Testcase Class per Feature p. 624
Testcase Class per Fixture p. 631
Testcase Superclass p. 638
Test Helper p. 643
Database Patterns p. 649
Database Sandbox p. 650
Stored Procedure Test p. 654
Table Truncation Teardown p. 661
Transaction Rollback Teardown p. 668
Design-for-Testability Patterns p. 677
Dependency Injection p. 678
Dependency Lookup p. 686
Humble Object p. 695
Test Hook p. 709
Value Patterns p. 713
Literal Value p. 714
Derived Value p. 718
Generated Value p. 723
Dummy Object p. 728
Appendixes p. 733
Test Refactorings p. 735
xUnit Terminology p. 741
xUnit Family Members p. 747
Tools p. 753
Goals and Principles p. 757
Smells, Aliases, and Causes p. 761
Patterns, Aliases, and Variations p. 767
Glossary p. 785
References p. 819
Index p. 835
Table of Contents provided by Ingram. All Rights Reserved.

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.

Excerpts

The Value of Self-Testing Code

In Chapter 4 of Refactoring [Ref], Martin Fowler writes:

If you look at how most programmers spend their time, you'll find that writing code is actually a small fraction. Some time is spent figuring out what ought to be going on, some time is spent designing, but most time is spent debugging. I'm sure every reader can remember long hours of debugging, often long into the night. Every programmer can tell a story of a bug that took a whole day (or more) to find. Fixing the bug is usually pretty quick, but finding it is a nightmare. And then when you do fix a bug, there's always a chance that another one will appear and that you might not even notice it until much later. Then you spend ages finding that bug.

Some software is very difficult to test manually. In these cases, we are often forced into writing test programs. I recall a project I was working on in 1996. My task was to build an event framework that would let client software register for an event and be notified when some other software raised that event (the Observer [GOF] pattern). I could not think of a way to test this framework without writing some sample client software. I had about 20 different scenarios I needed to test, so I coded up each scenario with the requisite number of observers, events, and event raisers. At first, I logged what was occurring in the console and scanned it manually. This scanning became very tedious very quickly. Being quite lazy, I naturally looked for an easier way to perform this testing.

For each test I populated a Dictionary indexed by the expected event and the expected receiver of it, with the name of the receiver as the value. When a particular receiver was notified of the event, it looked in the Dictionary for the entry indexed by itself and the event it had just received. If this entry existed, the receiver removed the entry. If it didn't, the receiver added the entry with an error message saying it was an unexpected event notification. After running all the tests, the test program merely looked in the Dictionary and printed out its contents if it was not empty. As a result, running all of my tests had a nearly zero cost. The tests either passed quietly or spewed a list of test failures. I had unwittingly discovered the concept of a Mock Object (page 544) and a Test Automation Framework (page 298) out of necessity! (A code sketch of this bookkeeping appears after this excerpt.)

My First XP Project

In late 1999, I attended the OOPSLA conference, where I picked up a copy of Kent Beck's new book, eXtreme Programming Explained [XPE]. I was used to doing iterative and incremental development and already believed in the value of automated unit testing, although I had not tried to apply it universally. I had a lot of respect for Kent, whom I had known since the first PLoP conference in 1994. For all these reasons, I decided that it was worth trying to apply eXtreme Programming on a ClearStream Consulting project. Shortly after OOPSLA, I was fortunate to come across a suitable project for trying out this development approach--namely, an add-on application that interacted with an existing database but had no user interface. The client was open to developing software in a different way. We started doing eXtreme Programming "by the book," using pretty much all of the practices it recommended, including pair programming, collective ownership, and test-driven development. Of course, we encountered a few challenges in figuring out how to test some aspects of the behavior of the application, but we still managed to write tests for most of the code.
Then, as the project progressed, I started to notice a disturbing trend: It was taking longer and longer to implement seemingly similar tasks. I explained the problem to the developers and asked them to record on each task card how much time had been spent writing new tests, modifying existing tests, and writing the production code. Very quickly, ...
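As an aside on the excerpt above: the Dictionary-based bookkeeping Meszaros describes is essentially a hand-rolled precursor to a Test Spy or Mock Object. The sketch below is a minimal reconstruction of that technique in Java (the language of most of the book's examples), not code from the book; the class and method names (ExpectedNotifications, expect, notified, report) are illustrative assumptions.

    import java.util.HashMap;
    import java.util.Map;

    // Minimal reconstruction of the expected-notification bookkeeping from
    // the excerpt. All names here are illustrative, not from the book.
    public class ExpectedNotifications {
        // Key: "<receiverName>/<eventName>"; value: a failure description
        // that remains in the map only if the expectation is not met.
        private final Map<String, String> entries = new HashMap<>();

        // Setup phase: record each notification a receiver should get.
        public void expect(String receiverName, String eventName) {
            entries.put(receiverName + "/" + eventName,
                    "missing notification: " + eventName + " -> " + receiverName);
        }

        // Called by a receiver when it is actually notified. An expected
        // entry is removed; an unexpected one is recorded as an error.
        public void notified(String receiverName, String eventName) {
            String key = receiverName + "/" + eventName;
            if (entries.containsKey(key)) {
                entries.remove(key); // expected and received: success
            } else {
                entries.put(key, "unexpected notification: "
                        + eventName + " -> " + receiverName);
            }
        }

        // After all tests have run: silence means success; anything left in
        // the map (missed or unexpected notifications) is printed as a failure.
        public void report() {
            if (entries.isEmpty()) {
                System.out.println("All tests passed.");
            } else {
                entries.values().forEach(System.out::println);
            }
        }
    }

A test scenario would call expect(...) for every observer it registers, raise the events through the framework under test, have each observer forward its notifications to notified(...), and finish with report(). Leftover "missing" entries mean an observer was never called; "unexpected" entries mean the framework notified the wrong party. That gives the same pass-quietly-or-spew-failures behavior the excerpt describes.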
