March 29, 2004

Interest in Open-Source Test Tools Grows

More and more software testers and programmers are becoming interested in using open-source tools for automated testing. Many are already using them, and many more want to learn how. Analyst firm FTN Midwest Research reports that open-source test tools are “gaining momentum” and cutting into Mercury Interactive’s market.

I’ve been speaking publicly on the topic for over a year, and even saw my Boston talk reported in Application Development Trends. I just taught a one-day survey on the topic in Redmond last week. The class’s 60 seats were sold out, and the seminar went very well. I’ll be teaching the seminar, Homebrew Test Automation, in Austin on April 30, and I’m currently talking to local user groups in Minneapolis, Denver, and Los Angeles about bringing this material to events they are planning. (Contact me if your local group might also be interested.)

I’ve previously described why Extreme Programming teams are eager to build their own test tools. Indeed, that’s the source of the bulk of the open-source test tools now available. But the interest in open-source tools goes well beyond Extreme Programming.

Obviously, a big attraction is cost. Open-source tools are free. This is a particularly big deal for smaller companies that just don’t have budgets for expensive testing tools anymore.

But free tools aren’t free of cost. Labor is still required, and labor costs can actually be higher with open-source tools.

Bob D’Antoni told an interesting story about costs at AWTA5. His company was comparing an open-source tool (SAFS) with two commercial tools (TestPro and Certify). The commercial packages were each priced at about $100,000 for his team, compared to free for open source. After their initial evaluation, his team wasn’t fully satisfied with any of the tools: each was missing a feature important to his team, and the missing features varied from tool to tool. He figured that for $100,000 he should have been able to get a commitment from a vendor to add the features he needed, but neither would commit. His company chose SAFS, and Bob had to hire a programmer for six months to add the feature they needed. So it really wasn’t free. But what they got for their money was the chance to make the tool fit their needs.

This kind of flexibility is a big part of the attraction of open-source test tools. Sometimes it’s a matter of preference. Other times it’s a matter of basic compatibility. It’s not uncommon for commercial tools to run into compatibility problems that only the vendor can debug. I’ve run into this on several occasions in my career.

In the first case, I was working for BMC as their Segue tool expert. We had test suites that were working fine; then we got a new build one week and none of them would run. After a couple of weeks of trying every other method of diagnosing the problem, I ended up having to get permission from our vice president to send a copy of our software to Segue for them to debug. I was pretty worried about my job at the time: if we couldn’t get the problem fixed, my skills would be pretty useless to my employer. But Segue’s engineers found a solution quickly after our software arrived. They suggested a change that our developers happened to have been considering anyway. So we were back in business. Happy ending.

Except that you don’t know when this kind of compatibility problem will arise. On another occasion, I was helping a company with Mercury’s tool. They couldn’t get it to work until they got a Mercury engineer on-site to debug the problem (both companies were in Silicon Valley). He recommended a couple of patches, and then we were in business again. On yet another occasion, I was helping a company with a compatibility problem with Segue’s tool. They were skeptical when I told them they needed to send their software to Segue for diagnosis (they weren’t in the same town this time). Software companies are understandably reluctant to mail out copies of software under development that they don’t want their competition to know about. Would this really work? Their software was so hard to install that they couldn’t even send a CD. They had to install it on a machine and mail the entire machine to Segue for diagnosis. (The install was improved before the software was made available to customers.) And Segue, as I’d predicted, was able to find the problem and make reasonable recommendations for how the software could be modified to work with the tool.

So what’s the moral? I was able, on several occasions, to get tool vendors to diagnose compatibility problems. But I suspect that many other automators have given up when they’ve seen similar problems. I certainly have many reports of tool X not working with a particular product. What I don’t know is whether those compatibility problems could also have been resolved if the automators had succeeded in getting the tool vendor to thoroughly investigate. The problem, in my view, isn’t with commercial test tool technology per se, but rather with the way it’s made available—or rather, the way that key parts are unavailable.

Why is compatibility such a problem? Actually, if your developers stick with generic, common UI technology, you won’t have problems. It’s when they use third-party controls, or dream up their own, that you run the risk of compatibility problems. Even then, a skilled toolsmith can often compensate. Indeed, many tool experts spend half their time adapting the tools to “custom” controls, and the tools support many techniques for adaptation. But if the controls are too “custom,” you have to get the vendor involved.

Sometimes it’s hard to get the assistance you need. The vendors’ support organizations are usually overwhelmed with calls for help from under-trained, time-pressed testers who are expected to make magic happen with the snazzy tool their company has bought them. It can be frustrating to convince them that you actually have the training, know the customization tricks, and still need help.

Couldn’t this trouble be avoided by just getting UI developers to stick to some reasonable compatibility rules?

The problem is that the rules are secret. I’ve talked to Mercury, Segue, and other tool vendors: the methods each tool uses to locate, identify, and interact with user-interface elements are proprietary trade secrets that they are loath to divulge. Their methods and algorithms are good enough that many users never run into these problems. But when users do, they are suddenly at the mercy of the vendor for assistance. And sometimes the answer is: you’ll have to wait six months for a fix.

Without being able to know the rules in advance, programmers get annoyed when you tell them they did something to break the test tool. With open source, it’s a completely different story. You can still have compatibility problems, but they are much easier to diagnose. The developers are sometimes eager to jump in and help figure out the source of the problem. They’ll probably learn something interesting along the way. And the VPs don’t even have to know about it.

So although cost motivates many testers and developers to take a look at open-source test tools, it’s the flexibility and control they offer that often wins them over. But at what cost? Does the increased labor they may require make them a money-losing choice in the end? Compatibility problems may be easier to diagnose, but you’re probably going to have to fix them yourself. How hard will that be?

It may not be as hard as you think. One reason is that it is often easier to make your software more testable than it is to make a tool more compatible with your software. With both sides under your control, you’re in a better position to find a workable design. The second reason is that “reflection” technology is now built into COM, Java, and .NET. Reflection allows you to ask software what objects it contains and what methods they support. Early test tools had to add this capability themselves—no minor feat. Now it’s built into the technology most developers are using.
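
To make that concrete, here is a minimal sketch of what reflection looks like in Ruby. (Java exposes the same sort of introspection through java.lang.reflect, and .NET through System.Reflection.) The Account class here is invented purely for illustration:

# An invented class standing in for part of an application under test.
class Account
  def initialize(balance)
    @balance = balance
  end

  def deposit(amount)
    @balance += amount
  end
end

account = Account.new(100)

# Reflection: ask the object what it is and what it can do.
puts account.class                            # Account
puts Account.instance_methods(false).inspect  # the methods Account itself defines
puts account.instance_variables.inspect       # the data the object is holding
puts account.respond_to?(:deposit)            # true

A test tool built on this kind of introspection can discover the objects in an application and drive them directly, instead of relying on secret identification rules.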

But even so, I don’t think that open-source tools offer significant cost savings compared to commercial tools. Successful automation projects have always spent the bulk of their budget on labor. The sad fact is that too many companies have been convinced to fork over the bulk of their automation budget for tools, with the expectation that automated testing would then be easy. With commercial tools, it occasionally is. But usually it ends up requiring more skill and effort than those companies realized. The result has been a lot of shelfware.

With open-source tools and home-built automation, these illusions are gone. Companies realize that they’ll have to dedicate labor, and they are in a position to cut back on foundering projects and continue to fund successful ones. So the big attraction of open-source tools isn’t that they reduce total costs, but that they don’t require big, risky, up-front tool purchases.

They also allow companies to make better use of the skills they already have on their teams. Open-source tools tend to use common programming languages, thus allowing developers and testers with general programming talent to contribute, rather than requiring automators who know the tool quirks, proprietary algorithms, and vendor-specific languages.

There’s one more reason why I’m promoting open-source solutions. Many good test suites built with commercial tools couldn’t be used to their full advantage because they depended on node-locked licenses. Developers would love to run automated tests on their own builds. They could rapidly find and fix their own bugs without having to work through a separate testing team ... except that they don’t have a license for the testing tool. So instead they wait until one of the testers with a tool license is free to test their software. The entire pace of development slows down. The testing team remains a bottleneck. Problems are found long after they were introduced, and extra effort is required to figure out who introduced them, and when. Developers think they’ve fixed reported bugs only to be told to try again. The sad thing is that these are exactly the kinds of problems that automation should be eliminating.

Note that the attractiveness of open-source test tools has a lot to do with several structural aspects of commercial tools as they are currently designed and marketed. Commercial tools require large up-front payments, rather than ongoing payments tied to the tools actually being used. They are priced expensively per seat, rather than using site-wide licenses that would encourage customers to make maximum use of the test suites they’ve developed. They keep their test interfaces private, and many still use proprietary vendorscripts rather than languages that are more likely to be known by many programmers and testers.

Many of these elements allow tool vendors to control and take advantage of their relationship with their customers. Large up-front payments require customers to make decisions when they are least informed. Vendorscripts lock customers into a tool, making a later tool change nearly unthinkable.

As open-source test tools get more attention and usage from testers and developers, I expect that one of the test tool vendors will realize that there is a big opportunity in realigning their business model so that their revenues are more closely tied to their customers’ success and less closely tied to the ability of their salesmen to wow customers.

Until that happens, I plan to continue to help people learn how to make the most of open-source and homebrew test tools. It’s become a major component of my consulting practice, and I’m running into more and more people who are seeing success. It works, and a lot of people are really happy with it.

In the past, vendors sold their tools by showing that they made financial sense. They preached, “Why build when you can buy?” But it isn’t about buy vs. build. With a commercial tool, you buy and then build. Why not just build? Many testers and developers think they actually get better platforms for building test suites with open-source tools.

Posted by bret at 11:24 AM | Comments (1)

March 13, 2004

Web Testing with Ruby

Brian Marick and I designed a class called Scripting for Testers. The premise of the class is that automated testing is fundamentally about exercising a programmable interface, so we immerse students in an environment where they get to do just that. We’ve tried testing several different interfaces, and the one that students have enjoyed most is an interface to Internet Explorer, which we drive in order to test a web-based application. I’ve been extolling the central importance of interfaces in my Test Automation Patterns class for years, but there is something about actually doing it that really helps people understand.

We will be teaching this class in Portland, Seattle, Orlando, Austin and Calgary this spring and summer. (See schedule.)

We have also made the course materials publicly available. You can find links to the materials we used last fall here. They are still fairly disorganized. I have an updated version of the materials that I will formally release to the Web Testing with Ruby project on Rubyforge. You are free to use these materials for self-study or to teach a class yourself. They aren’t really set up for self-study yet, however. I plan to work with the Center for Software Testing Education Research at Florida Tech on this as part of their larger program to provide open-source materials to support software testing education.

I’ve written before about the class and why we’re using Ruby. I could imagine someday teaching other classes on scripting for testers in Python or Tcl. Like Ruby, both of those languages are dynamically typed (or “duck typed”) and have interactive interpreters. Ruby’s is invoked with “irb”; Python’s with just “python” and no file name; Tcl’s with “tclsh”.
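
To give a feel for what this looks like, here’s a tiny irb session. The prompts are irb’s own; each => line is the interpreter echoing back the value of the expression just typed:

$ irb
irb(main):001:0> 2 + 3
=> 5
irb(main):002:0> "testing".reverse
=> "gnitset"
irb(main):003:0> [1, 2, 3].collect { |n| n * n }
=> [1, 4, 9]

You type an expression, see the result immediately, and adjust. That tight feedback loop is what makes these interpreters so useful.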

(Perl fans are likely to interject that “perl -d” can be used to invoke an interactive interpreter. Maybe, but it’s never worked for me; and, more tellingly, I’ve never seen this feature mentioned in introductory texts.)

Interactive interpreters are a great way to learn languages. I used them years ago to teach Logo and BASIC to elementary students. They are also great for exploratory automated testing.

Instead of a recorder, we use Ruby’s interactive interpreter to figure out how to write test scripts. Some commands tell us the names and attributes of the objects currently displayed in the web interface. Others can be used to enter text, set checkboxes or click buttons.

Ruby comes with a library that supports accessing COM, Microsoft’s technology for allowing different programs to interact with each other. COM was originally released as the core technology supporting the integration of Word, PowerPoint, and Excel into the Office suite. It’s designed to allow libraries to be called from programs written in various languages. We use Ruby to access the COM interface to Internet Explorer—including the DOM (Document Object Model).
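
Stripped down to the raw COM calls, a session looks roughly like this. It’s only a sketch: win32ole is the standard-library binding I have in mind, but the URL and the element IDs (“username” and “login”) are invented for illustration:

require 'win32ole'   # Ruby's standard COM/OLE binding on Windows

ie = WIN32OLE.new('InternetExplorer.Application')
ie.visible = true
ie.navigate('http://localhost/login')       # an imaginary login page
sleep 0.5 until ie.readyState == 4          # 4 means READYSTATE_COMPLETE

doc = ie.document                           # the DOM, reached through COM
doc.getElementById('username').value = 'bret'   # fill in a hypothetical text field
doc.getElementById('login').click               # press a hypothetical button

Run statements like these one at a time inside irb and you can watch the browser respond to each line, which is exactly how we explore an application before committing a test to a script.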

Chris Morris wrote a Ruby library that simplifies access to these interfaces. We’ve been using it in our class. Chris demonstrated the newest release of the library at AWTA5. It’s available on Rubyforge under the Web Testing with Ruby project. Paul Rogers and Jonathan Kohl have also been testing web applications using Ruby to drive IE’s COM interface. And we’ve all been sharing code with each other. Indeed, I’m very pleased that Paul and Jonathan will be joining us when we teach the class at XP Agile Universe in Calgary in August.

Posted by bret at 04:32 PM | Comments (2)

March 08, 2004

Testing Private Interfaces

I am aware of no other field where testing is restricted to publicly available interfaces. Not one. Every engineering field I'm aware of installs internal interfaces that are there for testing.

In other words, anyone who argues that tests need to be limited by publicly available interfaces is flying in the face of the experience of the rest of the world.

--John Roth

Posted by bret at 11:44 PM | Comments (1)

The Book for Ruby

After I recently listed my favorite Python books, some people got the impression that I now favor Python over Ruby. Not so.

Why's (Poignant) Guide to Ruby is not only a good introduction to Ruby, but also a very funny book. It reminds me of Mr. Bunny's Guide to ActiveX. Except Mr. Bunny's humor was based on sarcasm, while Why's comes from sheer joy. And the more I program in Ruby, the more I understand that Ruby is a joy to read and a joy to write. In a word, poignant.

I just showed Why's Guide to my son. He's not a programmer; he's an actor. He had an audition this morning and a performance this afternoon. He thought Why's Guide was pretty funny. "This is pretty entertaining," he said. And he kept reading when Why started showing code. The code examples were so funny that my son thought it was a setup. "Will that really work?" Yeah, it really will.

Why's Guide is actually a flippy book, like Computer Lib/Dream Machines. On the front was Computer Lib, and if you flipped it over, the back cover was actually the front cover to Dream Machines. Trippy, in a low-tech way. The front half of Why's Guide is a cross between a computer language manual and Mad Magazine. But the flip side is hard to find. The book is online, so you can't physically flip it over, and it took a little digging to find. The flip side is the source code for the book. It's in Ruby, naturally. And also in YAML and Textile, text formats that Ruby can understand. The Ruby code reads in that text and converts it to HTML. I learned as much reading the code that generates the book as I did reading the code in the book. And I enjoyed it just as much. I stayed up to the wee hours the other night reading it. In fact, I'm now writing this blog entry in Textile and using Ruby to convert it to HTML. Thus:

require 'redcloth'

# Read the Textile source named on the command line and print it as HTML.
print RedCloth.new( File.open( ARGV[0] ){ | f | f.read } ).to_html
Posted by bret at 02:02 AM | Comments (1)

March 01, 2004

Movies for Testers

James Bach presents his collection of movies with hidden lessons about software testing. How is testing like learning to describe the geography of the moon? Well, you've just got to see them. I've seen about half and plan to catch the rest.
Posted by bret at 10:34 AM | Comments (0)