July 29, 2004

Software Testing Books

Have you read Lessons Learned in Software Testing? Are you looking for other testing books to read? Here are the books I'm recommending.

Pragmatic Unit Testing, Hunt & Thomas. There has been a resurgence of interest among developers in unit testing, spurred on mostly by Extreme Programming, test-driven development and the JUnit testing harness. But most of the discussion has been about how tests can shape design, rather than about how tests can test. This book does the best job of describing how JUnit-style unit tests can actually be used for testing. The book is available in two flavors: one in Java for JUnit and one in C# for NUnit.
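If you haven't seen one, here is a rough sketch of what a JUnit-style unit test looks like. I've written it against java.util.Stack to keep it self-contained; the test class and the particular assertions are my own illustration, not an example from the book.

    import java.util.EmptyStackException;
    import java.util.Stack;
    import junit.framework.TestCase;

    // A minimal JUnit-style test, in the JUnit 3.x idiom. Each test method
    // sets up a situation, exercises the code, and asserts on the outcome.
    public class StackTest extends TestCase {

        public void testPushThenPopReturnsSameElement() {
            Stack stack = new Stack();
            stack.push("apple");
            assertEquals("apple", stack.pop());
        }

        public void testPopOnEmptyStackThrows() {
            Stack stack = new Stack();
            try {
                stack.pop();
                fail("pop() on an empty stack should throw");
            } catch (EmptyStackException expected) {
                // Boundary cases like this are where bugs tend to hide.
            }
        }
    }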

Testing Applications on the Web, Nguyen, et al. Most testing books claim to describe testing in general, but base their advice on experience with only one or two platforms -- most books, including mine. Nguyen's book, on the other hand, is quite specific. It provides detailed advice on how to test web-based applications, including how the architecture of your application should affect your testing strategy. If you are testing web software, you need this book on your desk. This is a recently updated second edition.

How to Break Software, Whittaker & Jorgensen. This slim volume presents a series of testing techniques, dubbed "attacks", that target common software errors. The list is based on an empirical analysis of a large number of bugs found in commercial software by the software testing labs at the Florida Institute of Technology. Each attack is illustrated with an actual bug found in everyday software. The analysis and the examples are mostly drawn from Microsoft software.

Software Testing, Patton. The most approachable book covering the array of traditional testing techniques, and a good introduction for the junior tester. The book is based on the internal training program at Microsoft, which trains more testers than any other company and employs many of the leading testing experts in the field.

Posted by bret at 03:03 PM | Comments (0)

July 13, 2004

Great Regression Tests May Never Find Bugs

A general rule is that a test has value proportionate to its ability to find bugs. I think this is mostly true. Sometimes this ability is referred to as the power of a test. Powerful tests are more likely to uncover bugs than weak tests.

Powerful automated regression tests, therefore, should be good at finding regression bugs. Or so you might think. But I don't think this is true.

The most useful regression tests can be run by a programmer against her own code, before it is checked in. If such a test finds a bug, the programmer really should fix it before checking the code in. But when that happens, there may be no evidence that the test suite ever found the bug.

Is this good or bad? If you want metrics that you can use to help justify the value of the test suite, it could be bad. You may prefer an arrangement that has the test suite run by someone else after the code (with the as-yet-unfound bug) has been checked in. Indeed many test suites have configuration or licensing requirements that make this a necessity.

I find this kind of independent regression testing problematic. It slows things down, dragging out the test/fix cycle. It reduces agility. I'd prefer it if my test suites were run by programmers directly, so that they could fix any problems immediately.

Indeed, this is exactly how most unit test suites are used. By design, they can run in a developer's environment. And by design, they run fast, so that developers have no reason not to run them. Because they are so easy to run, many developers have taken a newfound interest in reworking code ("refactoring"). If they make a mistake, they can quickly find and fix it.
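To make this concrete, here's a hypothetical illustration (the join() helper and its tests are mine, not drawn from any real code base). Tests like these run in milliseconds, so a developer who breaks the empty or single-element case while reworking the loop finds out immediately, and the bug is fixed before anyone else ever sees it.

    import junit.framework.TestCase;

    // A tiny utility and the fast tests that guard it. The code under
    // test is inlined here to keep the example self-contained.
    public class JoinTest extends TestCase {

        static String join(String[] parts, String separator) {
            StringBuffer result = new StringBuffer();
            for (int i = 0; i < parts.length; i++) {
                if (i > 0) {
                    result.append(separator);
                }
                result.append(parts[i]);
            }
            return result.toString();
        }

        public void testJoinSeparatesElements() {
            assertEquals("a,b,c", join(new String[] {"a", "b", "c"}, ","));
        }

        public void testJoinOfOneElementAddsNoSeparator() {
            assertEquals("a", join(new String[] {"a"}, ","));
        }

        public void testJoinOfNothingIsEmpty() {
            assertEquals("", join(new String[] {}, ","));
        }
    }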

Because they don't appear to find bugs, many have suggested that these things are no longer testing and aren't properly called tests. They must be providing value in some other way. They are "refactoring aids" or "progress indicators" or "checked examples."

But actually they may be finding tons of bugs. It's just that the bugs are being fixed as fast as they are made. I'm still going to call these things "tests." And I think we should try to make more of our automated regression tests so easy to use that they might easily give the impression that they no longer find any bugs at all.

Posted by bret at 12:48 AM | Comments (4)

July 10, 2004

Joining ThoughtWorks

I'm pleased to announce that I'm joining ThoughtWorks, an international consulting firm specializing in agile development, agile project management, and -- believe it or not -- agile testing. They have a great consulting staff and have recently recruited several notable independent consultants. I will continue to write on my blog, participate in software conferences, publish articles, and release open source software.
Posted by bret at 09:02 PM | Comments (0)

July 06, 2004

Maintenance Engineer Wanted

Remember the XA/21 energy management software whose failure contributed to last year's blackout in the Northeast? Its makers are looking for a systems engineer to help with "warrenty problems."
Posted by bret at 02:14 PM | Comments (0)