Wednesday, November 08, 2006

Apgar and Software and Bears, Oh My

Daniel Read wrote a great post about Apgar tests and software, and Edward G Nilges wrote a response. (Read more about the Apgar test over at Wikipedia.) Daniel's and Edward's posts are worth reading before you get too much further here, so go ahead ... I'll wait.

I think there are some good and bad things underlying the analogy (as there are with most analogies). Let me walk through them, the bad first:

  • The biggest issue is that there is a lot of variance in how software is written and how its success is measured. Without some standardization, it will be hard to get a meaningful, universal dashboard (which is what the Apgar really is). That doesn't mean that there couldn't be Apgar-like dashboards for different kinds of projects and organizations — and having those would probably be a good thing.
  • The Apgar is somewhat subjective (gameable, to use a term that showed up frequently in the comments of Daniel's and Edward's posts). It's hard to imagine that a similar dashboard for software wouldn't be gameable as well. (Whether or not a team is gaming the tests is probably a great metric for the health of the software.)
  • The Apgar is a lagging indicator. You don't know the score until the process is over, and you can't do much to go back and fix things. It does help you identify cases where an urgent response is needed, but knowing that things are headed south earlier would be much better.
And then the good:
  • The Apgar isn't given just once; it's given immediately after birth and again at 5 minutes (and several more times if the baby isn't thriving). Being able to apply the same test iteratively, watching for improvement, will help ensure that your corrective action is actually working.
  • The Apgar is simple. As Daniel said, there are five metrics, each of which is assigned a score of 0-2, and the total score (0-10) is the single measure that's used. Almost anyone can learn to rate an infant or understand the resulting score. This simplicity would be important for any software dashboard.

So, where do we go from here? Maybe it's time to build a list of what you think is important to a healthy software project. Some things that come to my mind are: a solid test suite, documentation, frequent releases, more than one developer with the commit bit set, and traffic on a project-specific mailing list. What are you looking for?
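To make the idea concrete, here's a minimal sketch of what an Apgar-style score for a project might look like. The five metrics are hypothetical examples pulled from the list above (they're my placeholders, not a standard); the mechanics mirror the real Apgar: each metric scores 0-2, and the sum (0-10) is the single number you track over time.

```python
# Hypothetical Apgar-style dashboard for a software project.
# Each metric is scored 0 (absent), 1 (partial), or 2 (solid),
# and the total (0-10) is the one number that gets reported.

METRICS = [
    "test_suite",       # is there a solid automated test suite?
    "documentation",    # is the project documented?
    "release_cadence",  # are releases frequent?
    "committer_count",  # more than one developer with the commit bit set?
    "list_traffic",     # activity on the project's mailing list?
]

def project_apgar(scores):
    """Sum five 0-2 scores into a single 0-10 health number."""
    if set(scores) != set(METRICS):
        raise ValueError("score every metric exactly once")
    for name, value in scores.items():
        if value not in (0, 1, 2):
            raise ValueError(f"{name}: score must be 0, 1, or 2")
    return sum(scores.values())

# Like the Apgar, give the test repeatedly and watch the trend,
# not just a single reading.
week1 = project_apgar({"test_suite": 1, "documentation": 0,
                       "release_cadence": 2, "committer_count": 1,
                       "list_traffic": 1})
print(week1)  # → 5
```

The point isn't these particular metrics; it's that the scoring stays simple enough that anyone on the team can produce the number and anyone reading it knows what it means.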


Anonymous said...

First, the Apgar test doesn't tell you whether the baby is healthy. I mean, the baby could have all sorts of problems and diseases but still pass. It's really a test for "un-health": if you don't pass, then you need help.

Most people have leapt onto the "software metrics" idea. That's analogous to the thousands of other tests you do later. You need something quick and simple. I'd start with the Joel test.

gnupate said...

Good point about the Apgar being a test for how badly you need help.

The Joel test is a reasonable test if you're a commercial/proprietary shop, but it doesn't cover a lot of things that an Agile shop might care about. If it works for you, great! I don't think it's the right test for me, though.