In "Apgar and Software and Bears, oh my!" I wrote up some initial thoughts in response to Daniel Read's Does Software Needs an Apgar Score? post (Apgar scores are a simple 5 metric assessment of newborn babies that results in a score from 0-10, any baby with a score lower than 4 is treated to some rigorous neonatal care). I've had a little bit more time to think about it (aren't weekends a wonderful thing?), and I wanted to write a little bit more about it. (By the way, Steve Benz posted Apgar, Metrics, and CVS about a different way to think about Apgar scores and improving software development.)
One thing I did (implicitly) was to turn a theoretical 'Software Apgar Score' into a dashboard (sometimes called a balanced scorecard). I've spent some time reading Robert Kaplan's work on this, and I agree with a lot of what he has to say. (Take a look at The Strategy-Focused Organization or The Balanced Scorecard for more information, straight from the horse's mouth.)
The basic idea is that you can (and should) create a dashboard for measuring progress, and that the dashboard needs to be aligned with your goals. It's also important that you measure leading rather than lagging indicators. The fact is that you need something more than a single dial ('Did we cut a release this week?') and less than a 100-page document. Four to six metrics seems to be about right, from everything I've read and tried.
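Here's a minimal sketch of what a four-to-six-metric scorecard could look like as data; the specific metrics, targets, and leading/lagging labels are my own hypothetical examples, not anything Kaplan prescribes.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    leading: bool           # True for a leading indicator, False for a lagging one
    target: float
    current: float
    higher_is_better: bool = True

    def on_track(self) -> bool:
        return self.current >= self.target if self.higher_is_better else self.current <= self.target

# A hypothetical scorecard: four to six metrics, each tied to a goal.
scorecard = [
    Metric("automated test coverage (%)",      leading=True,  target=80, current=62),
    Metric("open defects older than 30 days",  leading=True,  target=5,  current=11, higher_is_better=False),
    Metric("weekly build success rate (%)",    leading=True,  target=95, current=90),
    Metric("features delivered last quarter",  leading=False, target=12, current=9),
]

assert 4 <= len(scorecard) <= 6, "more than a single dial, less than a 100-page document"

for m in scorecard:
    kind = "leading" if m.leading else "lagging"
    print(f"{m.name} [{kind}]: {m.current} vs target {m.target} -> "
          f"{'on track' if m.on_track() else 'needs attention'}")
```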
Choosing which ones to measure is the hard part. The metrics are going to be different depending on what you want to do, which makes picking them even harder. Wouldn't it be cool, though, to have a catalog of metrics, with some tagging to indicate which ones are useful for which kinds of software or organizations?
I told Daniel that if he was serious about building something, I'd rather see a deep and broad catalog than a single tool. He had a great response:
Pat, this is exactly what I was thinking of as the next step! I like your term "deep and broad catalog" of software types cross-referenced with score-able attributes. I'm thinking of three key elements for this system:
- a guideline for what are considered ideal qualities for a "score-able attribute"
- a guideline for what constitutes a "software type" (language? context? purpose?)
- a consistent scoring system that will be useful across the variety of software types and score-able attributes (one idea I had was that all programs start off with a score of 100 and then points are taken off for lack of certain things, which sort of reverses the additive nature of the medical Apgar)
A tool to keep track of all this would be nice too. :-) Maybe the simplest place to start would be a wiki, but another option would be to use a taxonomy-driven tool like Drupal, which would be more structured from the start. And a custom tool is always a good option. I'd be happy to host it, but unfortunately my time is limited as far as putting the real grunt work in.
And of course it will be easy to get everyone together and agree on all of these things. Seriously, though, if others are crazy enough to jump in and give this a go, I'm crazy enough to participate. Bob Glass has already agreed to run an article in his Software Practitioner newsletter, and if we had some kind of framework along the lines of what I've described above, with some salient questions for soliciting feedback, then between the article and whatever interest we can generate on the web, it might come together into something useful.
Once the database of software types and score-able attributes is built, I like your idea of a front-end tool for managers, designers, tech leads, developers, consultants, etc. to use to identify the desired "software type" and see what the consensus was on the score-able attributes for that type.
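As a thought experiment, here's a minimal sketch of how such a catalog and the subtractive scoring Daniel describes might hang together; the software types, score-able attributes, and point deductions are hypothetical placeholders, not a proposed standard.

```python
# A hypothetical catalog: software types cross-referenced with score-able
# attributes, each with the points deducted when the attribute is missing.
# Every program starts at 100 and loses points, reversing the additive
# nature of the medical Apgar.

CATALOG = {
    "shrink-wrapped desktop app": {
        "automated regression tests": 20,
        "crash reporting":            15,
        "installer/uninstaller":      10,
        "user documentation":         10,
    },
    "internal web service": {
        "automated regression tests": 20,
        "monitoring and alerting":    20,
        "deployment runbook":         10,
        "load testing":               10,
    },
}

def score(software_type, present_attributes):
    """Start at 100 and deduct points for each missing attribute."""
    attributes = CATALOG[software_type]
    missing = [a for a in attributes if a not in present_attributes]
    return 100 - sum(attributes[a] for a in missing)

# Example: an internal web service with tests and monitoring but no runbook
# or load testing scores 100 - 10 - 10 = 80.
print(score("internal web service",
            {"automated regression tests", "monitoring and alerting"}))
```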