February 28, 2003

Windows Scripting

I've been using VMware a lot for testing. It's a great utility, allowing me to set up isolated test environments on virtual machines on my laptop. It gives me a portable test lab.

But it has a few quirks. One is that it really doesn't like having autorun enabled for my CD-ROM. If it is, it nags you every time you run it until you agree to let it turn it off. But with autorun off, not only does my laptop fail to autorun CDs, it often doesn't even recognize that i've swapped CDs. I needed to install a lot of software from CD and decided to find out how to turn it back on.

VMware doesn't give you a simple way to do this, but they do direct you to instructions in the Microsoft Knowledge Base:

WARNING: If you use Registry Editor incorrectly, you may cause serious problems that may require you to reinstall your operating system. Microsoft cannot guarantee that you can solve problems that result from using Registry Editor incorrectly. Use Registry Editor at your own risk.

And then they tell you how to make the change if you are truly brave enough to hazard using regedit:

  1. Click Start, click Run, type regedit in the Open box, and then press ENTER.
  2. Locate and click the following registry key: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\CDRom
  3. To disable automatically running CD-ROMs, change the Autorun value to 0 (zero). To enable automatically running CD-ROMs, change the Autorun value to 1.
  4. Restart your computer.

I'd just installed Windows Scripting Host, and i figured that i was going to have to keep turning autorun back on after using VMware -- so why not write a script? How hard could it be?

It was easy:

' Enable CD-ROM autorun by writing the registry value from the KB article
Set Sh = CreateObject("WScript.Shell")
key = "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\CDRom\Autorun"
Sh.RegWrite key, 1, "REG_DWORD"  ' write 0 instead to disable autorun

It was so easy that the big question is: why didn't either VMware or Microsoft give users this script? There'd be no need for the regedit warning. And it would be a whole lot easier for people.

On Unix, a script would be the natural answer. But then no one would ever conceive of a Unix platform that didn't have scripting built in from the start.

Posted by bret at 03:47 PM

Making Code Explorable

The Ruby program i'm working on is an implementation of mancala, a simple little game. I'm writing it to learn Ruby and test-driven development.

The move and capture rules are a little complicated. I needed a test format that was easy enough for me to get a gestalt for the game, yet easy to implement:

  def test_repeat_turn
    next_move (1, 3)
    assert_board ("\
   5   5   5   0   4   4
 1                       0
   4   4   4   4   4   4")
    assert_player (1)
  end

This worked pretty well. Starting with the default board the top player picks up the stones in the pit third from his left. The board is correct and it is still his turn (because his final stone landed in his store).
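The assertion itself doesn't have to be clever. Here's a minimal sketch of the idea -- the `boards_match?` helper is hypothetical, not the fixture's actual code -- which just compares pit counts while ignoring spacing:

```ruby
# Illustrative helper (hypothetical, not the actual fixture code):
# compare two board layouts by their pit counts, ignoring whitespace,
# so the expected board can be drawn in a readable, lined-up format.
def boards_match?(expected, actual)
  normalize = lambda do |s|
    s.split("\n").map { |line| line.split }.reject(&:empty?)
  end
  normalize.call(expected) == normalize.call(actual)
end

expected = "   5   5   5   0   4   4\n" \
           " 1                       0\n" \
           "   4   4   4   4   4   4"
actual   = "5 5 5 0 4 4\n1 0\n4 4 4 4 4 4"
boards_match?(expected, actual)  # => true
```

Normalizing this way lets the expected board stay visually aligned in the test source without the assertion being sensitive to exact column widths.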

But i had trouble getting the capture rules right. In other words, i couldn't get this test to pass:

  def test_capture
    setup_board ("\
   4   4   4   4   4   4
 0                       0
   4   4   4   4   4   0")
    player_turn (2)
    next_move (2, 2)
    assert_board ("\
   4   4   4   4   4   0
 0                       5
   4   0   5   5   5   0")
    assert_player (1)
  end

My program would fail to capture the stones opposite the final (empty) pit. I'd written the code, but there was a bug in it. The first idea that comes to mind is to run the test through a debugger. But i haven't learned the ruby debugger, so i decided to do something different.

Although this format worked well for tests, i also really wanted some way to interact with the code. The mancala games on the web were fun that way, and i wanted to get more of a feel for how my code worked. I spent an evening learning Ruby/Tk, but that wasn't what i wanted. I wanted interaction, but i still wanted to be able to access the code through the command-line interface as well (irb).

Finally i decided to refactor what i had (even though i was red). I moved some of the code that was in the test fixture into a new class that contained the state of the board and the current player. I called this class "Setup". I'd also gotten confused by which player was which. At one point they were 0 and 1, then i made them 1 and 2. Finally i realized that they really were top and bottom. I updated the show method to indicate whose move it was:

irb> load 'pit.rb'
irb> s = Setup.new
#<Setup:0xa0c2810 @player=0, @board=   4   4   4   4   4   4
 0                       0
   4   4   4   4   4   4>
irb> s.show
   4   4   4   4   4   4
 0                       0
   4   4   4   4   4   4

Next move: ^
irb> s.next_move (2); s.show
   5   5   5   5   0   4
 0                       0
   4   4   4   4   4   4

Next move: v
irb> s.next_move (5); s.show
   5   5   5   5   1   5
 0                       1
   4   4   4   4   0   5

Next move: ^
irb> s.next_move (3); s.show
   6   6   6   0   1   5
 1                       1
   5   4   4   4   0   5

Next move: v

It turned out that this was enough to really have a little fun with my code. I could see the board change from move to move. The irb interface makes it easy to repeat a previous command; i'd use this and then just change the pit number. (I could also make changes to the program, reload it, and then continue to play the same game using the new code. This is a great thing about Ruby. I used to do the same thing with Lisp.)
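The wrapper itself can be quite small. Here's a sketch in the spirit of that Setup class -- the names and the show format are reconstructed from the transcript above, and the actual sowing and capture rules are elided:

```ruby
# Sketch of an interactive wrapper like the Setup class described above.
# Names are reconstructed from the irb transcript; the mancala move
# rules themselves are left out.
class Setup
  TOP, BOTTOM = 0, 1

  def initialize
    @player = TOP
    @top    = [4, 4, 4, 4, 4, 4]  # top player's pits
    @bottom = [4, 4, 4, 4, 4, 4]  # bottom player's pits
    @stores = [0, 0]              # each player's captured-stone store
  end

  def board_string
    row = ->(pits) { pits.map { |n| n.to_s.rjust(4) }.join }
    "#{row.call(@top)}\n" \
    " #{@stores[TOP]}#{' ' * 23}#{@stores[BOTTOM]}\n" \
    "#{row.call(@bottom)}"
  end

  def show
    puts board_string
    puts "\nNext move: #{@player == TOP ? '^' : 'v'}"
  end
end

Setup.new.show
```

Keeping the display logic in a plain method like this is what makes the class equally usable from irb and from the test fixture.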

And i found my bug. The capture rule is that if the last stone is placed in an empty pit on your side, you capture the stones opposite. After distributing the stones, my program checked for a capture (current is the current pit):

    if current.owner == player and current.count == 0
      current.count = 0
      hand = current.opposite.count
      current.opposite.count = 0
      self.sides[player].store.count += hand + 1
    end

Do you see the bug? The problem was that the check should be current.count == 1 since we had just put the final stone in our hand into the pit. It's no longer empty.
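For comparison, here's the corrected check, sketched with an illustrative Pit structure rather than my program's actual classes:

```ruby
# Sketch of the corrected capture check. Pit here is an illustrative
# stand-in for the program's actual classes. By the time we check, the
# final stone has already been dropped, so a formerly empty pit holds 1.
Pit = Struct.new(:owner, :count, :opposite)

def capture_if_due(current, player, store)
  return unless current.owner == player && current.count == 1  # was == 0
  hand = current.opposite.count
  current.opposite.count = 0
  current.count = 0
  store.count += hand + 1  # captured stones plus the capturing stone
end

store   = Pit.new(:bottom, 0, nil)
landing = Pit.new(:bottom, 1, Pit.new(:top, 4, nil))
capture_if_due(landing, :bottom, store)
store.count  # => 5
```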

Once i had made the code more interactive, it became much easier for me to understand it and see the bug. Originally, i'd thought that having the command line interface would allow me to call different methods, check the state of objects and find out more about what was happening. But that wasn't even necessary. Just the interactivity helped me understand my code better so that the bug was obvious. It also helped that i could change code and do another move immediately, without having to reset.

In test-driven development, it's often perceived as a bad sign when we find ourselves manually testing our software. I agree that you need automated tests, but you also need to be able to interact with the software to get a feel for it.

Posted by bret at 12:05 PM

Update: More Tests than Code

Both Brian Marick and Chad Fowler wrote that they have seen ratios between test and implementation code similar to what i reported the other day. Both are using test-driven development with Ruby.

I also have a report that Windows NT 4.0 had 6 million lines of product code and 12 million lines of test code. That's without test-driven development (as far as i know) and i bet most of the code was in C/C++.

Posted by bret at 11:59 AM

February 26, 2003

More Tests Than Code?

I just took a look at a little project i've been using to learn Ruby. I'm also learning about test-first development. By my count i have 138 lines of "code" and 164 lines of tests. Is it common to see more test code than code code?

Some of the "test" code amounts to implementation in the fixture. Eventually it'll probably be refactored into the main code. But still...

Posted by bret at 09:56 PM

February 25, 2003

Thin GUIs and Scriptability

In this article by Cameron Laird and Kathryn Soraiz, they develop the theme that testability means scriptability because automating GUIs is so hard.

The examples are in Tcl and the writers seem most comfortable working in a Unix environment. Why is it that these ideas seem so natural to people working with Unix and so strange to people working with Windows?

Posted by bret at 10:17 AM

February 24, 2003

Testing for Developers

I got a chance to sit in on Mike Clark's presentation on test-driven development yesterday. He gave it as part of the Complete Programmer's travelling Java conference, which visited Austin over the weekend. After his presentation, we sat down and had a long talk.

Mike really likes test-driven development and is encouraging people to do it whether or not they are using XP. He thinks that it fits with continuous integration for lots of different projects. My only criticism of his talk was that i thought he was a little too deferential regarding the strengths of some of the traditional automated testing tools used by testing groups. My biggest complaint is that they are set up to divide testers from developers. They use their own (vendor-specific) language dialects and are priced and licensed in ways that make it prohibitive for developers to run the tests. They don't have to be this way; i was criticizing these features long before i got involved with XP and agile testing.

Mike and i agreed that it's a shame that testing groups are so often at odds with their developers. And it's a shame that there isn't more communication between their communities.

A question Mike asked me was what developers could learn from the testing tradition, especially those developers like Mike who are genuinely interested in doing good testing. It's a good question.

There are a number of testing techniques that have a long history in the testing literature. I've been studying this lately: both the literature and how they are taught in different classes. These are techniques that go by the names of equivalence class analysis, boundary testing, pairwise testing (aka orthogonal arrays), decision tables, and the list goes on.

I've known and used most of these techniques for a while, although some i've used only rarely. Others are embedded in my basic thinking: i use them without being conscious of the technique per se. The goal of my recent study has been to understand the definition of each so that i could teach them. Some blur into others, which is fine as a practitioner, but a teacher needs to be more precise. And i've also wanted to figure out which are really useful.

One conclusion is that equivalence class analysis is defined differently by different authors. The common idea, however, is that different tests can be seen as redundant if you'd expect them to find the same bugs, so you can eliminate the redundancy. The common examples focus on data inputs, grouping them into valid and invalid "equivalence classes." But the general idea is really that you should eliminate redundancy among tests that target the same potential bugs. This, in fact, is a general schema that all formal testing techniques adhere to. Many simply have a more complicated means for avoiding redundancy.
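As a concrete (and entirely hypothetical) illustration: suppose a field accepts integer ages from 0 to 120. Equivalence class analysis says to pick one representative per class rather than testing every value:

```ruby
# Hypothetical example of equivalence class analysis for an age field
# that should accept integers 0..120. One representative per class
# stands in for every input you'd expect to find the same bugs.
def accepts_age?(input)
  n = Integer(input, exception: false)  # nil if not an integer
  !n.nil? && n.between?(0, 120)
end

equivalence_classes = {
  "valid age"    => 35,     # any value in 0..120
  "below range"  => -1,     # any negative number
  "above range"  => 121,    # any value over 120
  "not a number" => "abc",  # any non-numeric input
}

equivalence_classes.each do |name, rep|
  puts "#{name}: #{rep.inspect} -> #{accepts_age?(rep)}"
end
```

Four tests instead of thousands -- the technique's whole job is justifying which tests you can safely skip.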

I find it interesting that Myers (the author of the first book on software testing) distinguishes equivalence class analysis from "error guessing". Either way, it seems to me that you are operating from some notion of where bugs might lie or how they might be distributed in the system.

So my main finding is that the best way to teach testing and develop testing skill is to develop two abilities: (1) to take a known bug and determine what tests could find similar problems elsewhere and (2) to anticipate what kinds of bugs might appear in software. The second is "test everything that could possibly break," but i think there must be some way of developing the ability to imagine what could break.

One problem with an over-focus on traditional testing techniques is that they are designed to find traditional bugs. The kinds of bugs, however, have changed with time. Languages have been designed to avoid some of the problems that used to happen (e.g. strong typing avoids one class of errors). And the increased scale of software development adds new potentials for problems. It's truly sad that some software testers are being certified by their ability to master testing techniques that were designed for finding bugs in COBOL, with no realization of how dated the techniques are.

Posted by bret at 05:39 PM

February 18, 2003

People Who Enjoy Finding Bugs

James Whittaker probably gets more joy out of finding bugs than anyone else in the world. Don't believe me? Check out his video. I can see Brian Marick cringing already.
Posted by bret at 02:14 PM

Simplicity and Almost Working

Brian Marick says:

But, while simplicity is part of the culture of programming, it's not part of the culture of testing. In fact, testers seem to revel in complexity. They believe "the devil is in the details" and see part of their job as finding oversimplification. They especially look for faults of omission, which Bob Glass called "code not complicated enough for the problem".

Whatever agile testing will be, it will mean bringing those two cultures into closer communication and cooperation. Right now, they operate independently enough that the clash of values can mostly be ignored.

Many non-technical people, testers or not, observe that software and computers often oversimplify. It's a fair observation. I was recently left waiting nearly an hour for my check at a restaurant because the waiter failed to bring me my soup and then needed to get it taken off my bill. Of course it was all computerized. Good testers think of these kinds of situations before the software is deployed.

Roger Needham has said, "Automation is replacing what works with something that almost works, but is faster and cheaper." I think good design acknowledges this and makes sure that the part that "almost works" is something we can live with.

Posted by bret at 11:44 AM

I'm Windows 98

Which OS are You?
Posted by bret at 11:25 AM

Why Testing Requires Sturdy Languages

Last night i presented Fit to the XP-Austin user group. This is a framework for data-driven testing developed by Ward Cunningham. I've been reviewing and demoing this code since last fall. Fit is written in Java and has been ported to C#, Python and Lisp. The port to C++ has apparently had some difficulties. One of the things that makes Fit a relatively simple solution is its reliance on advanced object-oriented features -- including reflection. C++ doesn't have that. Could it be ported to COBOL? It might be tough.

Experienced test automators might look at Fit and wonder what's so exciting about it. It's just data-driven testing. I've built similar frameworks for parsing and executing tests in SilkTest, but it wasn't easy. Part of the problem is that SilkTest -- and other tools such as WinRunner -- have weak languages. I've been preaching the problems of "vendorscripts" for some time, but thinking of how you'd port Fit to SilkTest or WinRunner made me truly realize how weak these languages are.

In my automation class i've said that the problem with data-driven testing is that it requires a parser and dispatcher, and that the complexity of these can introduce errors that undermine the goals of test automation. I cite specific examples of "silent horrors" that i've seen: coding errors that caused test suites to fail silently, skipping tests or failing to report signalled errors. What makes Fit different? At first i thought it was just that Ward was a smarter programmer. Now i'm realizing that it's also because it's written in a smart language -- Ward was smart enough not to write it in a weak language.
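The heart of what a smart language buys you can be sketched in a few lines. This is an illustration of the reflective dispatch that Fit relies on, not Fit's actual code:

```ruby
# Illustration of reflective dispatch (not Fit's actual code): each test
# row names an operation; reflection looks the method up at runtime, so
# there's no hand-written parser/dispatcher to get subtly wrong.
class CalculatorFixture
  def add(a, b)
    a.to_i + b.to_i
  end

  def times(a, b)
    a.to_i * b.to_i
  end
end

rows    = [["add", "2", "3"], ["times", "4", "5"], ["bogus", "1"]]
fixture = CalculatorFixture.new

results = rows.map do |method, *args|
  if fixture.respond_to?(method)
    fixture.send(method, *args)
  else
    :unknown  # surface the problem instead of failing silently
  end
end
# results => [5, 20, :unknown]
```

In a vendorscript without reflection, that `send` becomes a case statement you must hand-maintain for every new fixture method -- exactly where the "silent horrors" creep in.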

Posted by bret at 10:39 AM

February 17, 2003

Conference in Austin This Weekend

I'm trying to make arrangements to attend a Java conference in Austin this weekend. Dave Thomas, Mike Clark, Glenn Vanderburg and many others will be speaking. I'm particularly eager to hear Mike Clark on Test-Driven Development and I wouldn't mind hearing what Dave Thomas has to say about my new favorite language: Ruby.
Posted by bret at 05:38 PM

A new kind of test tool

How do you know how your web pages look on different browsers? It's been a hard problem to automate. Even if you can get your tests to run on different browsers, how do you check that they look good on each? The variations are many and often subtle and require human observation.

Here's a testing service that promises to make this easier. You still have to check the pages by eye, but this service goes to the trouble of loading your pages on lots of different browsers and then sends you a screen shot of the results. You don't have to maintain your own testing lab.

Posted by bret at 02:16 PM

February 14, 2003

Reporting Everyday Bugs

I have dozens of notes of various bugs that i've encountered in my everyday work. As a software tester, finding and reporting bugs is my job, but i find a lot that i'm not being paid to find.

Many non-technical people find their share of bugs as well, but they've usually been led to believe that this is somehow their fault. They often seem to think that people like me -- techies -- run into fewer bugs. In fact, we run into more. We use more software, and often less mature software. I'm going to start logging these bugs in my blog here partly to make this fact clear.

But i also think that there are various lessons that we can learn from the bugs we find. Indeed, i think that good software testers are good at learning lessons from bugs, so that they can better find them.

Anyhow, expect to see more bug reports from my backlog.

Posted by bret at 09:36 AM

Blog Bug

Server www-03 misconfigured

I ran into a bug last night when trying to post my previous blog entry. The blog software is made up of a bunch of Perl scripts for revising this site. When i tried to access them, i got the following error:

Got an error: Unsupported driver MT::ObjectDriver::DBI::mysql: Can't locate DBI.pm in @INC (@INC contains: /home/w/wazmo/public-web/cgi-bin/extlib /home/w/wazmo/public-web/cgi-bin/lib /usr/lib/perl5/5.6.1/i386-linux /usr/lib/perl5/5.6.1 /usr/lib/perl5/site_perl/5.6.1/i386-linux /usr/lib/perl5/site_perl/5.6.1 /usr/lib/perl5/site_perl/5.6.0 /usr/lib/perl5/site_perl /usr/lib/perl5/vendor_perl/5.6.1/i386-linux /usr/lib/perl5/vendor_perl/5.6.1 /usr/lib/perl5/vendor_perl .) at /home/w/wazmo/public-web/cgi-bin/lib/MT/ObjectDriver/DBI/mysql.pm line 14. BEGIN failed--compilation aborted at /home/w/wazmo/public-web/cgi-bin/lib/MT/ObjectDriver/DBI/mysql.pm line 14. Compilation failed in require at (eval 3) line 1. BEGIN failed--compilation aborted at (eval 3) line 1.

This error showed up in the browser window. I contacted the tech support guy at IO and we went back and forth trying to debug the problem. He thought the problem might be with my script, but i hadn't changed it in months and it had been working. So i figured it had to be a problem with their system. The exact error is that DBI.pm isn't in the @INC path. But he checked and could see it there.

Then he wondered whether the problem was due to their mirrored servers. They have three mirrored servers, and "www" can actually map to any one of them. He tested the admin URL with each and found that it worked with www-01 and www-02, but failed with www-03. That gave me a workaround and let him know that one of their servers was misconfigured -- probably missing some files.

Posted by bret at 09:27 AM

February 13, 2003

Java Bugs

An internal memo at Sun describes serious problems with their Java implementations. Particularly interesting is that their developers are closing 22% of the bugs as "will not fix."
Posted by bret at 09:57 PM