Tuesday, December 30, 2008

Rails and Merb -- Why Merge At All?

In my third round of mini-interviews about the Rails and Merb merger, I've turned to James Britt (@jamesbritt). James is a long-time Ruby hacker and a fan of non-Rails web frameworks. If you think the merger is a panacea for the Ruby world, James' answers may give you some food for thought from a different perspective.


In a way, this reminds me of the gcc/egcs merger back in the day. What kinds of benefits do you think the Ruby, Rails, and Merb communities will see from this merger?

James What kinds of benefits? Very little for Ruby overall, some for the Rails communities. I don't understand the value to Merb.

Major disclaimer: I do some Rails work, but prefer Ramaze (and before that, Nitro). I did Rails for money, and Nitro for fun. For about a year now I've been free to use whatever framework I think best for a task, and that's been Ramaze, and limited time means I've lately not been keeping up on all the goings-on in the various Rails communities.

I did try to get a feel for Merb recently, and found it to be much nicer than what I know of Rails. If I couldn't use Ramaze, Merb looked to be an acceptable alternative. But whether or not I used it, the ideas explored in Merb (as with IOWA, Sinatra, Waves, Camping, and any of the other two-dozen or so Ruby Web frameworks) were of value because everyone could learn from each other and steal the best parts.

Because Merb was essentially (as best I could tell) a variant of Rails (with ideas from Nitro, Ramaze and other frameworks), it seemed easier for ideas to trickle back and forth between Merb and Rails. Rubyists had more options, and both Merb and Rails could feed off of each other. It seemed a win all around to have them independent.

The consolidation removes this useful competition. It's maybe a plus for folks who prefer Rails, and a gain for people who want to do things the Merb way but can now say they are using Rails. It's less useful to people who want to see more distinct options.

The competition for the top spot in the Ruby Web Framework space has been good for everyone. What's going to happen now that the two big fish represent one, even bigger, fish?

James The idea that there is a "top spot" in the Ruby Web framework space has not been good for Ruby. There are real benefits to having a large, thriving community behind any tool or framework, but it's also critical to avoid a monoculture. It's better to have many thriving communities and an exchange of ideas.

There is a surprising mood among some Rubyists to want to see a "winner", for there to be just *one* of certain things. There's a cult of personality applied not just to people, but to projects as well.

The fuss over Merb v. Rails was a bit odd because it seemed (to this outsider) that the fight was over Pepsi v. Coke Cherry Zero, when in the back of the Ruby fridge we had mineral water, moonshine, champagne, and iced tea. There are lots of great things going on in Rubyville, things that aren't getting the attention they deserve.

Ruby appealed to me because it seemed to be about choice. One could craft a language on top of the language that suited the task at hand. You were not locked into The One True Way.

If this merging helps people interested in Merb and Rails, great for them. But if it means less attention to the many other valuable Ruby Web tools, if it means fewer exchanges of ideas, then it's a loss for Rubyists at large.

Where else in Rubyspace do you think this kind of merger would be possible and helpful?

James What would be useful is more thought given to reasonable modularity rather than to more merging. For example, I'd like to be able to pick among Ruby tools for managing project tasks. Rake is the current leader there, though sometimes I want something a bit different.

But I'd also like to be able to reuse individual task definitions from my Rakefiles, even if I'm not using Rake.

Given that Ruby is so flexible, I should be able to move around in tool-space and still have a fair amount of re-use. I should be able to use templates and parsers and task runners and HTTP request routers and ORMs and never feel locked in to one or another Big Framework. (Rack is a good example of providing mid-level granularity.)

I'd rather see less consolidation, less coupling, less emphasis on "winners" and "leaders", and more focus on solid, pluggable code, crafted and assembled by each of us.

Monday, December 29, 2008

Rails and Merb -- Better Together?

After my quick interview with Yehuda and Kevin the other day, I wanted to post a couple more points of view. David Heinemeier Hansson (@dhh) and Jeremy McAnally (@jeremymcanally) were good enough to weigh in as well.


What kinds of benefits do you think the Rails and Merb communities will see from this merger?

DHH We get to join the best ideas into Rails and make those ideas available to a much larger audience. We also get a bigger, better team for pushing Rails forward.

Jeremy I think this can only be good for the two communities. Bringing Merb's sensibilities to a large, mature codebase will not only open up a lot for the framework technically, but I honestly think we'll see a lot of innovation and, dare I say, *synergy* between the two cultures. There's been a lot of squawking about monoculture this or competition that, but honestly I think we'll see a lot more come out of working together rather than arguing with one another.

I have my own apprehensions about adding someone to the core team who isn't actually *using* Rails every day. One of the biggest wins for Rails was the fact that it was, and largely still is, extracted from working code: changes, additions, and extensions from working production code. Adding people to the team who aren't "practitioners" will create a slightly different dynamic, I think, but I'm not sure if it'll end up being a good or a bad thing. I trust that Yehuda is a very competent Ruby programmer, so I don't doubt that the code will be good; I just think the perspective may not be the same.

The competition for the top spot in the Ruby Web Framework space has been good for everyone. What's going to happen now that the two big fish represent one, even bigger, fish?

Jeremy As for non-Rails web frameworks, I don't think it means much. Frameworks such as Sinatra, Ramaze, Mack, and so on all have something fundamentally different to offer technically, whereas Merb and Rails were very technically similar in their goals. This similarity is why it makes sense to merge Merb and Rails, but it's also why it won't affect the other frameworks.

DHH You should look at competition as being much larger than just the Ruby sphere. There are a TON of choices in this world for building web-based software. Everything from .NET to J2EE to PHP. In that picture, Rails is still a comparably small fish.

Where else in Rubyspace do you think this kind of merger would be possible and helpful?

Jeremy I'd like to see the ORMs become a little more singular. I'm hoping something like Active Relation or (a much simpler version of) Sequel or Ambition will create a Rack-like interface for ORMs to hook into so we can have different DSLs on top of a core that's super fast and tight.

I'd also like to see a merge in the realm of these extension libraries. There's Active Support, facets, extlib, this, that, and so on, all doing basically the same thing on one scale or another. Ideally we'd have some sort of core library that lazy loads/installs each piece so you don't have to have a gigantic library but still get the same functionality. Having a ton of hands working on these libraries will speed them up and make them more efficient.

Wednesday, December 24, 2008

You Got your Merb in my Rails

When the news about the Merb/Rails merger broke, I shot off an email to Yehuda Katz (@wycats on Twitter) to get his take on things. Here's what he had to say:


In a way, this reminds me of the gcc/egcs merger back in the day. What kinds of benefits do you think the Rails and Merb communities will see from this merger?

Yehuda The benefit is that Rails becomes a framework that can scale from a single-file, light app, all the way up to a full stack (perhaps even more full stack than Rails is today!). That means that people who were having to struggle to gain acceptance for Merb at their work will be able to leverage Rails' popularity, while also being able to hold onto the reasons they wanted to use Merb in the first place. A win-win.

The Rails community will get all the benefits of Merb in their favorite framework. Faster performance, a public API (which means plugins that break less), the ability to use other ORMs, and the list goes on and on.

As a whole, the community gets more smart people working with each other instead of against each other. This should help grow Ruby even more. I am absolutely convinced that this is the year we put away the myths of Ruby's so-called inadequacies by scientifically disproving them one by one.

The competition for the top spot in the Ruby Web Framework space has been good for everyone. What's going to happen now that the two big fish represent one, even bigger, fish?

Yehuda The competition has yielded results! Merb has proven out a bunch of ideas that the Rails core team now wholeheartedly embraces. That's only a win-win for everyone. Now that Rails will become more like Merb, it'd be great if another framework, representing a different community with different interests did exactly the same thing. Forcing Rails to acknowledge the needs of a community of people by proving those ideas out in code worked here, and it will work again.

Where else in Rubyspace do you think this kind of merger would be possible and helpful?

Yehuda The proliferation of rspec-like frameworks has gotten a little bit out of control. I'd love to see some combined efforts there. Dave Chelimsky, who runs rspec, has said over and over that he's in favor of cleaning up the core, which seems to be the big objection that people have that causes them to go make yet another speccing kit.

Hopefully future successes here will show people that it's possible to put aside differences and build something great!


I also asked Kevin Clark (@kevinclark on Twitter) for his take on the merger

Kevin I think the Rails/Merb merger has potential to be good and bad for the community. A single API for a unified set of tools and plugins is a good thing, clearly, and makes it more attractive to develop software in and (maybe more interestingly) for the Ruby web space. I feel Merb got a lot of things right when reconsidering how a web framework should be built, and Rails will benefit from that. Merb, at the same time, is going to be wrapped up into the Rails ecosystem that's churning out documentation and libraries at a sometimes dizzying pace. A bigger community will help smooth the sometimes-rough edges of Merb development.

The big thing that worries me is the competition aspect of the merger. Yes, sometimes the rivalry was silly and overly dramatic, but each group pushed the other to improve, pay attention to alternate lines of thought, and provide something better. I hope that discourse continues publicly. The worst thing to come of this would be that debates that formerly happened in the open, and produced new insights in the open, are hidden away in rails core.

One thing to consider is that this is exactly the sort of behavior Dave Thomas encouraged in his keynote at RubyConf. If we look at Merb as an experimental fork of rails (yes, this is horridly simplified) that gained popularity and was eventually folded back into the mainline, it exemplifies what Dave thinks we need to start doing with Ruby. So, I'd encourage more forking rather than merging. Even smaller libraries benefit. Just look at the Nokogiri/Hpricot arms race.

Tuesday, December 23, 2008

Author Interview: Real World Haskell

Over the last couple of weeks, I've traded emails with Bryan O'Sullivan (@bos31337 on Twitter), Don Stewart (@donsbot on Twitter), and John Goerzen (@jgoerzen on Twitter), the authors of Real World Haskell. We've talked about their book, why Haskell matters, and why you shouldn't be afraid of monads (at least in RWH).

By the way, this book is another case of O'Reilly doing the right thing. Real World Haskell is available online at book.realworldhaskell.org/read/. The online book includes a reader comment tool which will help the book continue to improve. Thanks O'Reilly for making things like this happen.


How real world is Haskell?

Bryan The tools and libraries around Haskell have increased greatly in number, robustness, and capability over the past few years. There has been a concerted push within the community to make it easier to develop and deploy new code. As a result, Haskell is in the enviable position of becoming a "stealth" real world language, where a team that adopts it can hope to steal a march on people using less productive languages.

Don The Haskell language, toolchain and libraries have really converged on "industrial strength" over the last few years. The Haskell community, built from three pillars: the academics who designed the language, the open source community driving production of code, and the industrial users applying the result and feeding back new ideas, is really buzzing. Drop by the #haskell IRC channel to get a feel for the energy and excitement around the language.

To get a feel for the growth around the language, you only need to hang out at CUFP (the Commercial Users of FP workshop), where we have people talking about their day jobs doing hardware design in Haskell, controlling hydraulic equipment, running financial infrastructure, deploying web apps, writing high assurance systems and more.

To complement this growth in industrial use, the open source community has been churning out new libraries and tools. Almost a thousand Haskell packages have appeared on Hackage — Haskell's CPAN — in the last 20 months. From delicious, flickr and twitter bindings, to Perl compilers, p2p network systems and filesystems. The growth and diversity of code produced has been amazing.

John Very. I am using it at work at a lawn mower manufacturer. Haskell has been involved in innovation in distributed version control systems (darcs) and Perl 6 as well.

I am using it directly for database and web work, LDAP tools, parsing and data conversion, and just about any other routine programming task I might have. I am very happy with the Haskell programs in our setup at work, and have a greater confidence in their reliability than I have in the tools we've implemented with other languages.

LDAP, data munging, web stuff ... these sound like some of the grotty, sys-admin things that I tend to do, but a lot of sys admins are going to be scared away by CSisms like 'type inference', and 'functional programming', or worse 'eschewing side-effects' (yeah, the last one's not your fault). Are sys admins a reasonable target audience for your book? If so, how can they get the most out of it without being impaled on a monad or something?

John This question gets to the heart of what RWH is about — bridging that gap. The preface to the book really gets to the heart of it: why Haskell is interesting for sysadmins, what it has to offer them, and what benefits they can expect by learning it.

Some of these things aren't terribly hard concepts. We can explain type inference as simply "the compiler figures out what the types are by how they're used". Monads are powerful, and might be scary because they've been presented that way. But to get Real Work done, you don't really have to care about them much (though they can make your life easier if you do).
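For a flavor of what that looks like in practice (our own illustrative sketch, not an example from the book), GHC will work out every type below purely from how the values are used:

```haskell
-- No type signatures required: the compiler figures out the types
-- from the operations applied to each value.
double x = x * 2        -- inferred: double :: Num a => a -> a

swap (a, b) = (b, a)    -- inferred: swap :: (a, b) -> (b, a)

main :: IO ()
main = do
  print (double 21)        -- 42
  print (swap (1, "one"))  -- ("one",1)
```

And misuse is still caught before the program ever runs: applying `double` to a Bool, say, is a compile-time error, because Bool has no Num instance.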

I really think that laziness is the hardest concept to grasp. People are so used to telling a computer, "do task x, then task y, then task z if both of them worked." Haskell programs read more like, "OK, computer. If you ever need to do task x, here's how. You figure out if you need to do it." This is obviously a grotesque oversimplification of both perspectives, but my point is that it's a huge mindset shift. But it makes some hard problems expressible quite easily. We talk about functions in Haskell — really — in terms of whether or not they can work with infinite lists.
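As a small demonstration of that mindset shift (again a sketch of ours, not code from the book), here is the "you figure out if you need it" style John is alluding to, infinite lists included:

```haskell
-- An infinite list of naturals. Laziness means an element is only
-- ever computed if something actually demands it.
naturals :: [Integer]
naturals = [1 ..]

-- take demands just the first five elements, so only five are built.
firstFive :: [Integer]
firstFive = take 5 naturals

-- The Fibonacci numbers as a lazy list defined in terms of itself.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = do
  print firstFive       -- [1,2,3,4,5]
  print (take 8 fibs)   -- [0,1,1,2,3,5,8,13]
```

In a strict language, defining `naturals` this way would be a recipe for an infinite loop; here it is just a recipe, and the runtime consults it only on demand.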

So, what we do, is show people. Not just "laziness does foo", but we write code to demonstrate it. Real, working code that compiles and runs and solves real problems. We invite comparisons to other languages, and explain our approach.

You don't have to have a CS degree to be a programmer, and Haskell is no exception. I maintain that it's not harder, just so different that people who have spent a lot of time in the imperative, non-FP world have a few new ideas to soak up.

And really, that's the best part of Haskell, in my opinion. Even if you don't use it, it's a mind-blowing language. I think that it would be truly hard to learn all about Haskell and then go back to using Java or something.

So to answer your question: Haskell has a lot of utility to a sysadmin, and that's my primary use for it, and yes sysadmins are a reasonable target for this book.

Don Well, John's a sys admin, so he obviously finds Haskell entirely suitable. But I was pondering this question, and asked our guru administrator, Paul Heinlein, at Galois why he was reading RWH, and what he hoped to get from it:

There are times when I'll write something in, say, Perl that requires several CPAN modules, some of which may be pretty non-standard. Moving that code to a different machine forces me to install those modules yet again (and perhaps again and again).

I've often yearned for a language with a compiler that can produce real system binaries (not something that requires a runtime system like Java) that I can ship to various systems. The language I've had in mind would work at a higher level than C, be available on Linux/Unix platforms, have decent library support (a la CPAN), and have an accessible developer/user community.

Haskell looks like it might fit that bill. I won't know for sure until I've done a lot more work, but it certainly looks worth the effort.

So, portability, native code, high level. The same reasons Linspire used Haskell for their distro scripts.

Bryan's comment that 'a team that adopts [Haskell] can hope to steal a march on people using less productive languages' makes me think of Paul Graham and Lisp. Are we going to be seeing "On Haskell" on the shelf at our local bookseller in 10 years?

John I think it's safe to say that there will be more Haskell books in the future as the language's popularity continues to expand. I haven't read "On Lisp", so I can't comment on that specifically.

Bryan I haven't read Graham's book (though I've done a lot of Lisp hacking), but from skimming the blurb, it sounds quite similar to our own. I know that more Haskell books will be appearing in the years to come, and I hope that they will span the range from the practical to the highly advanced.

What have you done to get past the 'ivory tower' feel that so many books, tutorials, and blogs about Haskell have?

Don We've focused on the "in the trenches" category of programming tasks we all do in Haskell: hacking databases, network programming, parsing binary protocols, writing web apps, designing GUIs, and writing parallel and concurrent programs. The kind of stuff we do in Haskell in our day jobs, but where the techniques hadn't been gathered in a single place, until now.

We're kind of spoilt as authors: we got to take all that "ivory tower" research work on new language and library ideas, filter it through the open source community, and apply it to make the daily programming tasks easier and more fun.

Bryan We grounded all of our examples in code that people could download and execute, and our examples perform realistic tasks from our own experience. It's also important that none of us is a professional academic; we never ran a risk of being removed from the concerns of our fellow practical programmers.

John I still remember when I learned Haskell, and how I was frustrated that it took so long to get to I/O in some tutorials. It seemed as if people were afraid of it, or thought it unimportant.

I approached this topic from the perspective that I/O in Haskell is beautiful and powerful, that it is critical to real world use, and that we have to cover it well and pervasively. We introduce I/O in chapter 1, dedicate chapter 7 to it, and have several more chapters that deal with it extensively, directly or indirectly.

The other thing is showing people how people in business, open source, and at home, are using Haskell to solve problems. We tried to include as many full examples as we could: programs that compile, run, and do something useful. In RWH, we have things such as a barcode image processor and a podcast aggregator.

In which problem domains should programmers be looking to Haskell for a better way of doing things?

Bryan It might be more helpful to turn the premise of this question around: there are just a few domains to which Haskell is not well suited, notably embedded systems and those demanding real-time response. And there remain some areas where library support is essential to making progress, but the necessary Haskell libraries have not yet been built and would require a substantial investment of time. But for a large number of application domains, Haskell is already a good enough language for fast-turnaround productive use.

Don Four main themes come up when you look at why people are using Haskell:

  1. They have hard problems
  2. They care about correctness and robustness
  3. They want serious performance, while still working with a high level language
  4. They want to program their multicore system

Haskell's a rare beast, combining the safety of languages like Ada with the high-level abstractions of languages like Python, Ruby or Lisp (or even higher levels of abstraction..), yet compiled down to efficient native code that competes with C. And it may be unique as the only language with those properties that also targets multicores *now*, out of the box.

Perl 6 is a great example of a "hard problem" that got easier once they started using Haskell. No one had produced an implementation of Perl 6 until Audrey Tang decided to spec it out while learning Haskell. The result was Pugs, the first working Perl 6 implementation, and still the most sophisticated and feature-complete implementation.

Another nice example is IVU Traffic Technologies' solver for "roster compliance" of EU bus timetabling systems. Apparently, EU bus regulations are NP-hard to comply with! So they wrote a constraint solver in Haskell to verify compliance of bus timetables, which they call from their existing C++ and Java code, ensuring that regional bus routes in the EU are on time and efficient.

Galois, where Don works, uses Haskell to solve hard problems in the design of trustworthy critical systems. They use Haskell for almost all of their projects, because of the ability to combine high-level language productivity with low-level performance (for things like file systems, OS kernels and network stacks), while crucially retaining the ability to build correct-by-design software and to employ formal methods to prove correctness.

So, if you care about productivity, want to avoid bugs, need performance but want to work with a high level language, or have a multicore system, then Haskell might be a sensible choice.

John There are a great number of problem domains where it could be useful — it's a general-purpose language now, and no longer restricted to a niche.

I think of it as, in some ways, the next natural evolution of a language such as Python. Python is a great language for writing code fast, but it has some drawbacks: its lack of static typing means that type errors won't be caught until runtime, for instance.

Haskell is quick to code in, and its type inference means that type errors can be caught at compile time, even without you having to pepper code with type declarations.

But it goes deeper than the type system. The functional approach to programming has a lot of benefits, and Haskell really turns it into a modern force in programming.

So Haskell gives you speed of development, safety, and a useful paradigm. This is all good news for real world things: web sites, database tools, and the like. Nobody wants to see an exception on their website or have a critical data exchange tool crash.

What is it about Haskell that drew you to the language?

Bryan I became interested in functional programming in the late 1980s, and discovered Haskell around 1992. For me, the beauty and rigour of functional programming posed an irresistible challenge. It hasn't hurt at all that these properties lead to the development of clearer, more expressive code.

John I had been using Python for a number of years as my main "go to" language. Every so often, I like to learn a new language. I had tried OCaml, and was dissatisfied with a number of things in it (such as its I/O system). Plus it wasn't different enough for me. So I decided that Haskell looked interesting, and dove in.

So I got involved with Haskell simply because it was different than anything else I'd used.

What do you think the Haskell community can learn from other language communities?

John I'm probably not well-equipped to answer here, since I have never been deeply involved with any other language community. The Haskell community is an incredibly diverse, friendly, and intelligent place. Tools such as Hackage are clearly inspired by CPAN/CTAN.

Haskell-cafe is an amazing list in that you'll see posts from CS Ph.D.s writing about type theory right next to people discussing ODBC databases. I've never seen anything like that on any other language list, and I hope we can maintain it.

Bryan Honestly, the Haskell community is in an enviable position compared to every other language I know. Between the vast amount of intellectual energy and the friendliness that people exhibit, with a community size that's big enough to get important things done but small enough that people know each other personally, I can't think of any other language that's in a better spot right now.

Monday, December 22, 2008

Author Interview: Relax with CouchDB

O'Reilly is at it again, getting an open content book out there to cover an emerging technology. This time it's Relax with CouchDB by Chris Anderson (@jchris on Twitter), Jan Lehnardt (@janl on Twitter), and Noah Slater (@nslater on Twitter) -- the book is also available as a rough cut, if you'd like to support O'Reilly and the book.

Last spring, Jan came out to MountainWest RubyConf to talk about CouchDB, and I've been interested ever since. Now that the book was on the way, I asked Chris, Jan, and Noah to sit down for a quick interview. Here's how it went:


I know this is the title of your first chapter, but I feel like we should start things out with it here too: Why CouchDB?

Noah I'm a bit of a hypertext fetishist. Serving up documents via HTTP is great, but I became interested in CouchDB because it lets me serve up documents, and then lets me edit them too! There are a few other technologies, like WebDAV, that let you do that over HTTP, but here is this amazingly elegant solution that doesn't require protocol extensions or any other annoying cruft.

Of course, CouchDB also happens to be a very powerful database! A document database isn't for everyone, but if your application revolves around organising and serving up documents, CouchDB hits a real sweet spot.

Chris I'm pumped about CouchDB because it has the potential to create a bunch of new freedom. Its replication feature makes offline databases at least as powerful as hosted data. Because everyone already knows the API (whether they've heard of it or not), writing apps on it is (will be) child's play. I say "will be" because right now we're on the edge of it. CouchDB is not yet 1.0, so we are still learning how to write apps against it. I am focussed on finding the simplest path from user to documents. Ajax seems to be the answer.

Jan I gave a few presentations on CouchDB. In the introduction, I usually ask who has built database-backed web applications. I usually get 80-90% hands. I follow up with "... and who likes it and doesn't have any issues with the database?". Most hands go down and I get a few laughs.

This is a nice skit to start a presentation with and win an audience's sympathies, but it also gives RDBMSes a bad reputation, which they do not really deserve.

Today, relational databases are used everywhere on the web (where I'm coming from). For a long time they were the only sensible choice for solving the problems you get with concurrent access to your data under high traffic. But as demand grows, relational databases are being used in ways they were not intended for. JOINs are broken up, data is de-normalized, disk space and insert speed are sacrificed for concurrent read speed, and so on. They are no longer the ideal technological choice.

CouchDB is built for the web scenario. Storing huge amounts of semi-structured data is the default case in CouchDB. It makes excellent use of modern multi-core machines and multi-machine setups. CouchDB views are built using Map/Reduce, the concept that made Google. Replication allows a user to take data offline and work with it locally, without a network connection; and it can be used to synchronize machines in a load-balanced or highly-available setup, or both. And the HTTP REST API makes talking to CouchDB as easy as opening a browser.

And it is written in Erlang which gives us high concurrency (20k requests per second on a single machine?), fault tolerance and live code-upgrades. Erlang is worth an entire interview in itself.

Oh hey, that isn't exactly a concise answer and I didn't even tell you half of the good stuff :-)

You're approaching this book in a very open fashion (the book is online and available as you write it). How did you convince O'Reilly to go that way?

Jan We are developing CouchDB. Writing documentation comes with development. We figured that if we have to do the writing anyway, we don't want to do it twice.

O'Reilly is pretty open about (heh) open books. They have the Subversion book that everybody knows and recently released a book on Haskell. They have experience doing this kind of thing and they have seen good results in the past. When we asked about the possibilities, they just said "sure".

Chris As Jan says, it wasn't hard. O'Reilly understands the value of open information, and the value proposition around publishing, which is part of why they've done so much to enter the wider arena of sharing ideas.

Jan Noah ultimately pushed us in the direction of writing the book in the open. Once decided, I couldn't see it done any other way, and I am glad for the pushing.

Noah We had been talking about doing a book for a while and I was always very adamant that whatever we did would be released under a free license.

The biggest deficiency in free operating systems is not in the software—it is the lack of good free manuals that we can include in these systems. Many of our most important programs do not come with full manuals. Documentation is an essential part of any software package; when an important free software package does not come with a free manual, that is a major gap. We have many such gaps today.
   —Free Software and Free Manuals, Richard Stallman

Since Stallman wrote this essay, the technology publishers have started to wake up a little bit. A growing number of manuals and books are being released under free licenses, and this is absolutely marvellous. O'Reilly is really leading the way on this, so we were very lucky to get a deal with them.

Our editor told us a surprising rule of thumb: releasing a good book under a free license makes it sell more copies, and releasing a bad book under a free license makes it sell fewer copies. Let's hope that we are the former!

What benefits do you expect from this approach?

Chris For me, the hardest part is knowing what people just learning CouchDB will need to know. The authors mostly communicate with people who already understand and use CouchDB, but I'd like the book to be capable of drawing you in, even if you're new, so that you feel comfortable building Couch apps.

Noah I am very lucky to be writing the book with two people a lot smarter than me. When I make mistakes or write something a little silly, I get corrected. Similarly, when you develop something in the open, you get all these really bright people reading your stuff and picking you up on things that you've missed or didn't think about properly. Open collaboration is hugely beneficial like that.

Jan That the world fixes our typos. Oh wait, no! Actually, that's what our editor will take care of. You saw in my answer to the "Why CouchDB" question that it is not a one-liner. Add a very diverse potential readership and you get even more ways to put things. We can't cater to everybody, but we are trying to make this book worth a read for a lot of people, from developers to administrators, system architects, and hardcore RDBMS lovers.

By opening the writing process, we get decent feedback very early on and as a result will produce a much better book. Besides, we have been open source developers since forever; we just couldn't do it any other way. Release early, release often.

So far there's been a lot of feedback on the book's mailing list. How is that affecting your writing process?

Jan In early drafts you know that there are passages that try to explain an idea but don't do a good job of it. If you know the topic at hand pretty well, you might not notice immediately. Public feedback will tell you pretty quickly what works and what doesn't, and that helps with the review process. The writing process, at least mine, is not really affected.

Noah I think the most significant thing I have taken away from this process is the level of confusion caused when trying to compare CouchDB with traditional databases. I mean, fundamentally, this comparison makes about as much sense as comparing cheese and bread. Sure, you can eat both of them, and one may be applicable in one situation and not in the other, but they also taste really nice when you put them together. That's kind of how I see the CouchDB versus RDBMS debate. Meaningless, essentially. CouchDB isn't a panacea, nor are relational databases, and both of them have their uses. I think we're going to work on clarifying this in the book.

Having said all that, we've only been taking comments for five days now and the response has been just overwhelming. Who knows what other issues or points of confusion we're going to find along the journey? I'm pretty sure whatever happens, it's going to be just as rewarding. Thanks for the input, everyone!

Chris I've been happy about how people are reacting to the figures. I basically hand-sketched some basic drafts of technical drawings for the book. I was thinking, "what's the simplest thing that could possibly work?", and some of the drawings were even captured with the iPhone's built-in camera.

I was surprised because people seemed to like the hand-made feel. I've started refining that style, with the rough drafts for the code-example sections. I'm a big fan of the Head Rush books, so I'm hoping that when O'Reilly brings their artists in, they can approach it with the same playful style.

You've posted three chapters of the book already. What should we be looking for next, and how soon should we be seeing it?

Noah We're aiming to submit another six chapters to O'Reilly in early January, so you should see things trickle through to the website as we complete that process.

Jan We are aiming for some time in January for the next batch. We are trying to do the book sequentially, and the ToC is online already, so you can see what's coming :o)

Chris We're working on a description of the example blog application we'll be releasing. The app right now is pure Ajax, but by the time CouchDB hits 1.0, we'll be able to do major portions of it without JavaScript at all. CouchDB is getting HTML generation capabilities, so expect that to change some of how we build applications.

What kinds of projects should developers be looking at CouchDB for?

Jan After my talks or in discussions with developers I often get asked what "niche" CouchDB fits in. I usually say "The Web", and that's me trying to be funny again, because the web is nowhere near a niche.

Anything that stores messy data that users (no offence) submit (Facebook, Flickr, YouTube, etc.) and needs efficient access to that data. Systems that handle document structures (CMSes, blogs ...). Situations where offline work is preferable. The internet is available everywhere, except when the WiFi is not working or the hotel charges $5 a minute. Not using the Net to get to your data is usually faster, too.

Finally, databases are a fairly boring topic (among non-database nerds). CouchDB makes databases fun again.

Chris Yes and +1 to that, Jan!

Look at replication and think about the new opportunities it opens up. Think about applications with the power of the desktop (and location) but written in the language of the web. HTML, JavaScript, REST, JSON: these have become essentially the lowest common denominator for web services. When offline mode is not a service downgrade, and the source code is at the desktop, people have new affordances. I'm hoping Couch apps become the new Excel macro.

While CouchDB is written in Erlang, there are a lot of libraries to use it from other languages. Which ones have you dealt with? What's good and bad about them? Which ones will show up in your book?

Chris We're concentrating on the HTTP API for the book, so most of our examples are written in JavaScript or curl. However, there are good libraries in most languages these days, so to the extent a language handles dynamic JSON-like objects well (and can access HTTP), it's a good fit for CouchDB.
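Since CouchDB speaks plain HTTP and JSON, any language with an HTTP client can talk to it without a dedicated library. Here's a minimal Ruby sketch of building (but not sending) a document-creating request; the database name, document ID, and the local server address are assumptions for illustration:

```ruby
require 'json'
require 'net/http'
require 'uri'

# A hypothetical schema-free document; CouchDB stores arbitrary JSON.
doc  = { "title" => "Hello Couch", "tags" => ["intro"] }
body = doc.to_json

# CouchDB's REST convention: PUT /database/document_id creates or
# updates a document. "example_db" and "hello" are made up here.
uri = URI.parse("http://localhost:5984/example_db/hello")
req = Net::HTTP::Put.new(uri.path)
req["Content-Type"] = "application/json"
req.body = body

# To actually send it you would need a CouchDB running locally:
# res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }

puts req.method                    # the HTTP verb we built
puts JSON.parse(req.body)["title"] # the body round-trips through JSON
```

The point is how little machinery is involved: one verb, one URL, one JSON body.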

Jan I think I wrote three PHP libraries and only one didn't completely suck, but those were the early days. There is the excellent (documented, unit-tested) PHPillow, which has been extracted from a real-world project, which is always a good thing.

I have used couchdb-python by our very own contributor Christopher Lenz for a number of projects now, and it is very solid. If you are more of a Twisted person (heh), David Reid hosts Paisley.

Good and bad? CouchDB introduces new paradigms and we are still in the process of finding out what client-library abstractions work best (hint, ActiveRecord doesn't work well at all). Like all pioneering, this includes some stabbing in the dark. But in general, the libs I've seen are at least well suited for a single job or class of jobs. Maybe there is no one library to rule them all. We'll find out.
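The mismatch Jan hints at can be seen in miniature: ActiveRecord assumes a fixed set of columns, while a CouchDB document is a free-form JSON structure. Here is a tiny, purely hypothetical Ruby sketch of a hash-based document wrapper (not any real client library's API):

```ruby
require 'json'

# A minimal document wrapper: no schema, no columns -- just a hash
# that round-trips through JSON, the way CouchDB stores documents.
class CouchDoc
  def initialize(fields = {})
    @fields = fields
  end

  def [](key)
    @fields[key]
  end

  def []=(key, value)
    @fields[key] = value
  end

  def to_json
    @fields.to_json
  end

  def self.from_json(json)
    new(JSON.parse(json))
  end
end

doc = CouchDoc.new("type" => "post", "title" => "Schema-free")
doc["tags"] = ["couchdb", "ruby"]  # add a field nobody declared up front

copy = CouchDoc.from_json(doc.to_json)
puts copy["title"]
puts copy["tags"].length
```

Nothing here enforces a schema, which is exactly why column-oriented abstractions like ActiveRecord sit awkwardly on top of it.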

If you want to get into CouchDB (and Erlang) what's the best way to learn?

Jan For CouchDB, you don't need to know any Erlang. If you are familiar with the web, you are good to go. Start out by reading through the wiki and of course our book (*cough*). For more specific and tutorial-style documentation, check Planet Couch; it aggregates CouchDB-related blog posts and already includes a wealth of information.

If you still want to get into Erlang, there is Thinking in Erlang, a free 30-page PDF that gives you a rough overview. For deeper diving, Joe Armstrong, Erlang's inventor, wrote an excellent book, Programming Erlang, which is a fun read and packed with all you need to set out writing your first (and second) applications.

Chris I learned Erlang by working my way into CouchDB's source code, based on my experience with the HTTP API. The HTTP callbacks are easy to find, and since you already know what they do, it's a good way to learn Erlang. There are plenty of ways to get into CouchDB without touching Erlang — we're working on a feature that allows you to write arbitrary controller logic in JavaScript, so really, you can customize CouchDB, as well as use it, by programming in any language that understands JSON.

Friday, December 19, 2008

Reading List for the Holidays

I'm not sure I can call the next two weeks the holidays, since I'm going to be spending a great deal of time at work, but I hope to get some serious reading in while I'm at it. Here's what's on my list.

Hopefully, I'll be chock full of good ideas and armed with new tools in 2009.

What are you reading over the next couple of weeks?

Wednesday, December 10, 2008

First Post: VeloSLC: Getting the Ball Rolling

I attended several LISA conferences back in the 90s, and was a member of SAGE for a while, but sort of slid away from the whole thing when I took a turn in the ranks of IT management. When I went back to systems/infrastructure admin/engineering, I never really got back into it.

Then, I read a post by Luke Kanies which suggested that LISA was stagnating and recommended Velocity as a better alternative.

Now, I fondly remember the time I spent at LISA, and the things I learned at the various conferences, but a conference devoted to web performance and optimization seems a lot more focused on what I do. Luke's point that Velocity has more of a community feel makes me want to get on board. So, how best to do that?

I like to think of myself as a community hacker. I helped start the very first Ruby Brigade, I've helped put together Perl Monger groups, and I've helped organize the MountainWest RubyConf. So what would be more natural than trying to start a local, Velocity-minded group — VeloSLC?

Let's start with a mailing list. As we build to critical mass, we should talk about getting together, maybe over lunch, or one evening a month. If we can develop the interest, a local 1 day conference wouldn't be out of the question. It all needs to start with a conversation though, so let's get the ball rolling.

Tuesday, December 09, 2008

Apprenticeship Patterns Interview with Dave Hoover and Adewale Oshineye

Dave Hoover and Adewale Oshineye are working on a book called Apprenticeship Patterns, which is due out in March of 2009. The book is about learning how to learn, and pushing yourself along the path toward a higher level of craftsmanship. While it's not Ruby-centric, I think it's the kind of book that will be widely read within Ruby circles. To get a jump on things, I've started an interview with Dave and Ade. Here's the first part:


Why don't you start out by telling us a bit about yourselves

Dave Sure. So, I was a practicing child and family therapist until 2000, when I wanted to try my hand at programming. I learned Perl and was a Perl/CGI developer until 2002, when I "discovered" a bunch of different things like Jerry Weinberg, Extreme Programming, Ruby, and Java. Agile and Java were in their heyday, so focusing on these helped me get a job at ThoughtWorks in 2004. My two years at ThoughtWorks were a great experience; it was during that time (2005) that Ade and I met (via email) and started writing the book. In 2006 I left ThoughtWorks to join a smaller, more locally focused consultancy, Obtiva. I'm now a principal of Obtiva, where I pioneered our Ruby competency and founded our Software Studio practice.

Ade I started programming back in the 8-bit days using Basic and 6502 assembler. I then took a long detour into political theory and journalism until the late 1990s. I studied for an MSc and started working at a dotcom consultancy. In 2000 I stumbled upon Lightweight Methods (which is what the Agile movement used to call itself before the invention of the Agile brand) and, through that, the Pragmatic Programmer book. That one book introduced me to Ruby, Ward Cunningham's C2 wiki, and the Extreme Tuesday Club.

At the time the Extreme Tuesday Club was Europe's epicenter for all things related to Agile, TDD and Mock Objects. We would meet up in a central London pub and people like Martin Fowler, Erich Gamma and Kent Beck would just drop by for a pint.

I eventually joined ThoughtWorks because back then they were recruiting all these really fantastic people and there was no way I was going to turn down the chance to work with people like Aslak, Nat Pryce, Obie, etc. The funny thing is that I think I'd already bumped into Dave on bookshelved.org by the time he joined ThoughtWorks. I have this vague memory of sending him an email in his first week at the company and saying something like "are you the Dave Hoover from bookshelved?"

Up until I joined Google I'd spent my entire career at various different kinds of consultancies across western Europe. I'm now a Technical Solutions Engineer, which basically means that when it comes to the company's big strategic partners, my group is responsible for everything from JavaScript problems right down to broken trans-Atlantic cables. The best part of the job is that I get to work with people across the entire company. As you can imagine, this is a really good way to learn humility.

What are these apprenticeship patterns you're writing about, and why are they so important?

Dave These apprenticeship patterns were originally extracted from my experiences teaching myself how to become a software developer. I switched careers when I was 26 and taught myself Perl and JavaScript, then Java and Ruby, and I've played with Scheme and Erlang. But the patterns aren't about any specific languages or technology, and they're not design patterns. These are behavioral patterns. For example, the first pattern I wrote was "Be the Worst", which was based on a quote from guitarist Pat Metheny, who advised new musicians to try to always be the worst guy in their band.1 This obviously doesn't mean to try to play poorly; it means to surround yourself with great musicians. When I read that quote in Chris Morris' weblog, it immediately resonated with me because it helped me describe why I had joined ThoughtWorks the previous year. I wanted to work with people who were better programmers than me; it's one of the best ways to learn.

I believe these patterns are important because they have a strong theme of staying technical, of being dedicated to one's craft, and pursuing the mastery of that craft. There are too many talented, yet short-sighted, developers who grab their first management opportunity and end up hating their job just a few years later. These patterns are important because we don't need a bunch more programmer-turned-manager people, we need a bunch more insanely experienced, talented software craftsmen that can wield their vast knowledge and experience to create innovative, robust software, and shape the next generation of software developers.

Ade These apprenticeship patterns are about capturing the common techniques that experienced people have used to build their careers and making them available to newcomers to the software industry. It turns out that a lot of the common techniques are grounded in certain values, e.g. feedback is important, take responsibility for your career and fate, learn by working with people who know what they're doing on real projects, share what little you know in order to build a community, etc.

These patterns are important because they make it more likely that someone starting out today will be able to surpass their elders. If every software developer doesn't have to start their career from scratch and reinvent every technique that skilled people have been using for decades then it becomes reasonable to talk about the software industry moving forwards.

We're not claiming that this wisdom is ours. We've merely followed in the footsteps of books like The Pragmatic Programmer and Software Craftsmanship. These are books that people keep telling us changed their lives. We're not quite that ambitious. But hopefully, if we introduce new software developers to ideas like software craftsmanship and show them the most powerful ideas we know of, then those people can use them to change their lives.

Dave, you mentioned the pattern "Be the Worst". One problem with "being the worst" is that there is always a senior member of any team. How do those people find ways to be the worst?

Dave Remember, these are apprenticeship patterns. They're for people who identify themselves as apprentices. So, if you don't identify yourself as an apprentice, then this pattern is not appropriate for you; instead, you should be looking for opportunities to mentor more junior programmers. But if you do identify yourself as an apprentice and you find yourself leading a team (and this is not uncommon for talented apprentices), then it's time to consider a couple of apprenticeship patterns. First, "Use Your Title", meaning: if you've been programming for two years and your title is Senior Application Developer, then use that title to gauge your organization. Chances are your organization is sub-par if someone with so little experience could rise so quickly. Sure, you may be a natural, and you may be insanely smart, but if your learning is slowing and you're the best on your team, it's the right time to "Be the Worst" and look for a new team to join.

How does this compare to the more traditional kind of apprenticeship practised at RoleModel Software?

Ade Sadly, I've never experienced a proper apprenticeship or studio environment. The nearest I've got has been working with some of the stellar folks at ThoughtWorks. Dave's a much better person to handle this question. I will just mention that one of the goals of the book is to provide guidance for the vast majority of us who aren't lucky enough to get that kind of explicit mentoring.

Dave These patterns are different from the kind of apprenticeship that was (is?) practiced at RoleModel. That's because these patterns were extracted from the experiences of people who weren't in traditional apprenticeships. These patterns are going to be most useful for newcomers to software development who find themselves in less than ideal circumstances, the sort of circumstances that keep Dilbert in business. For example, one of the patterns is "Nurture Your Passion" and is written for the enthusiastic developer who finds himself in demoralizing situations at work and needs to take the time to keep his passion for the craft alive. That pattern would be more relevant for an inexperienced developer working in a large, inept IT organization than an apprentice at RoleModel.

For those of us team leaders, project managers, and development directors who have the power to create more ideal circumstances for apprenticeship, Pete McBreen's Software Craftsmanship is a more relevant book. Reading about the ideal scenarios in Pete's book inspired me to scrape my own informal apprenticeship together out of non-ideal circumstances. And then I used Pete's ideas, combined with my own, to create an apprenticeship program similar to RoleModel's in Obtiva's Software Studio.

How well are these patterns going to transfer to the technical non-programmer (say, a sys admin like me)?

Dave I would think they would transfer well to system administrators, database administrators and web designers. I know that my editor at O'Reilly, who is non-technical, found some of the patterns useful in her own career. The patterns are mainly about learning, attitude and motivation, so the details that are specific to programmers shouldn't be too much of a hindrance.

Ade I think you will find that these patterns are very useful to someone in your position. A lot of the ideas in the book are drawn from a wide range of sources and expressed through patterns. For instance, we cite Twyla Tharp (a dancer turned choreographer) and Atul Gawande (a surgeon) quite often because they're highly skilled practitioners of their disciplines who have written about the path they took.

I would probably quibble with your description of yourself. You're more than your job title. The best sys-admins I've ever known were people who spent time reflecting deeply to find trends in the problems they were fixing, created feedback loops, dug deep below the surface of issues/systems and did many other things that could have been taken straight from our book. I suspect that technical people in the software industry have far more in common with each other than our arbitrary job titles would lead us to believe.

You will probably find our perspective (as developers) a bit different to your perspective. However you should find that the patterns are useful to you precisely because they talk about things you're already doing or planning to do. The example implementations of the patterns, which we've called Action sections, might not be as relevant to you though.

In fact we tackle this issue in the preface. The preface points out that patterns are meant to be a mapping from a family of problems to a family of solutions. The way you apply this mapping will depend on your current context. Even though we assume that the majority of our readers will be full-time software developers the fact that you're a sys admin or a DBA is just a slight modification of this context. I look forward to seeing how that affects the solutions generated by your usage of the patterns.

Ade, I liked your reminder that developers and sys-admins/DBAs have more in common than we often think. It reminded me of a blog post by Andrew Shafer. How do you think we can best build on that commonality?

Ade I found that blog post about the tribal nature of the software industry to be very insightful. Unfortunately, we tend to have teams who don't maintain the systems they build and who feel that production support is somebody else's job. As the systems we all build/maintain/administer/support start to handle ever more data and need to scale better, and as users develop expectations based on the consumer-facing web apps they use, we're not going to have the luxury of that kind of horizontal specialisation. We're all going to need to develop a more thorough sense of the stack on which we're building rather than just throwing code over a wall.

The way we can build on this commonality is by arranging conferences that span the tribes so that we can start to realise how much value other people's skills have. Those of us who don't arrange conferences can do our bit to help by trying to go to at least one conference a year which is filled with people who have an 'alien' but effective perspective. Find a community of web designers, embedded sensor network programmers, unix sys admins, kernel hackers, etc and just lurk at one of their conferences.

Soaking up the ideas that other groups of highly skilled people care about can bridge the divide but it can also give you valuable new insights into your own work. For instance I use the techniques in Tom Limoncelli's book Time Management for System Administrators to help me manage my day. It really helped me when systems like GTD didn't seem to fit the way I live or work.

Andy Hunt's new Pragmatic Thinking & Learning book talks a lot about the Dreyfus model and learning to push your way up it. Where does self-imposed apprenticeship fit in that process?

Dave I still need to read all the way through Andy's book. I had to put it off so I could finish mine, but I've heard rave reviews about it. I honestly don't know anything about the Dreyfus model, so I'll have to defer to Ade on this one.

Ade I haven't read Andy's book yet (it's on my reading list), but I've read the presentations that both Andy and Dave Thomas have given about the Dreyfus Skills Acquisition model. I think the two books complement each other.

We had a candidate pattern called Learn How You Learn which isn't likely to make it into the book since only a few of our interviewees had used it. This pattern was about finding out which techniques for learning were the most effective for you and then trying to make sure you prioritised their usage accordingly. The idea being that if you're aware of the set of potential learning styles then you can detect situations where you're using an inappropriate style and fix it. However the people who had tried this seemed to see it mostly as choosing to learn from reading books or by trying to build things.

Compared to Andy's, our work tends to focus more on the factors that propel someone through the different levels of the Dreyfus model using a system of values. In effect, we're trying to place learning in a wider social context. Even though I'm a big fan of the Dreyfus model, I think that the Shuhari model has some advantages over it for the kind of people we hope will read our book.

The Shuhari model (first you obey rules, then you try to break the rules, then you just do whatever makes sense in context) claims that each level still retains the qualities of the earlier levels. These are not so much steps on a ladder as concentric rings. This is important to software craftsmanship because of the tension between tacit knowledge (where somebody just knows the right thing to do in a given context) and the transmission of skill. If you're wandering around just doing whatever makes sense and you're getting great results then I'm happy for you. However in the craftsmanship model you need to actually pass your skills on. You have to break out of your unconscious competence by creating new rules or heuristics so that new apprentices can match and eventually surpass your achievements. If we don't do this we run the risk of falling into the same trap as people like Stradivari where the quality of their workshop's output declined after they died. As software craftsmen we have a duty to help to create teams that can sustain and even improve their skill level when we leave.

The only way this can work is if you retain an awareness of all the places where you don't have unconscious competence and have to blindly follow rules. You may be the world's greatest C programmer but the memory of all the times you have to cling to the rules for normalising a database schema is what makes it possible for you to start thinking about representing your C skills in way that others can imitate and enhance.

Atul Gawande's book, Better, has a wonderful section where he talks about the importance of checklists for people who are highly skilled. The checklist is how they make sure that they don't leave instruments inside the patient when an operation develops complications or forget to wash their hands after seeing a patient who is terminally ill. His point is that tacit knowledge is easily forgotten by experts in moments of great stress but the checklists can be used by the less skilled members of a team to keep everybody honest. In fact some of my fondest memories of ThoughtWorks projects are the moments when someone I've taught TDD asks "is there a test for that?" At that point I've either been caught as I'm about to cut a corner or I've been caught applying some meta-heuristic without explaining it first. Either way everybody learns something.

The Dreyfus model seems to be focused on helping the teacher tailor lessons based on their assessment of the level of their students. I agree about the importance of "directed skill acquisition" and that working with someone who knows what they're doing is definitely the best way to learn. However most apprentices won't have a teacher or mentor because there are surprisingly few people in the industry who want to provide guidance to others. You have to consciously set out to find these mentors who can guide you up the levels of whichever model you wish to use.


1 Chad Fowler read the same quote in the same blog post and wrote about the same concept in My Job Went to India.

Monday, December 08, 2008

An Author's Story: A Cautionary Tale

A few days ago, I was IMing with a friend who's working on a technical book. Things started out like this:

So, how's the book coming?

Yeah, I have some editing to do still... I can't speak for when it'll be published. I've pretty much lost all willpower to work on it :( so it's hard to finish up the deathmarch.

Why the lack of willpower? Have things been too long in the pipeline, or is it something else?

Then, a sort of cautionary tale rolled out over IM. I've reduced it to the following points. I wanted to post it so that others could benefit from the mistakes my friend and his publisher made.

It looks like there were several things that went wrong:

  • The publisher really wants things turned around faster than my friend is able to work on them, and they want them in discrete (thanks, anon) chunks that don't match well with the agile mindset a lot of programmers work from. If you're thinking about writing a book (or contracting a programmer to write one), watch out for mismatched expectations.
  • After writing several chapters, the publisher made some very major decisions about changing the style of the book without asking my friend about it. He wanted to write a narrative using idioms to guide the reader along; they wanted a more granular approach. Make sure both sides know what they're looking for up front, then stick to it.
  • At this point, my friend felt like he was losing steam and ownership. Ennui had set in, and both the book and the schedule suffered. If the author isn't happy, I don't see how the publisher will be happy in the long run. The opposite certainly applies as well.

My friend has already written other books, but this was a new publisher and a new situation. Had he and the publisher partnered more, maybe this would have turned out better, I don't know. I do know that they both would have been happier, and a better book would probably be on the shelves already.

Update! You might also want to read A Tale of Two Books.


This post is part of a collection of articles about Publishing, Growing Markets, and Books.

Thursday, December 04, 2008

Benchmarking and Refactoring

This blog post is pulled from the Profiling and Optimizing Ruby tutorial I wrote for IBM DeveloperWorks a couple of years ago. If you'd like to reuse it, please note that the initial publication at DeveloperWorks must be attributed. I've reposted it here so that people can access it without signing up for a DeveloperWorks account.

Another section of this tutorial, Benchmarking Makes it Better, was posted yesterday; it might be worth reading first if you don't have a good handle on benchmarking already.

If you're not comfortable with profiling Ruby Code, you might want to look at some of my other articles on Profiling:

The original tutorial used the following "shell_grabber.rb" script as example code:


file = "/etc/passwd"
File.open(file).each do |line|
  if line.match('/bin/bash') then
    print line.split(':')[0]
    puts " uses bash"
  end
end

Following the profiler

Now that you've seen profiling and benchmarking at work, it's time to put it all together. Back in Listing 5 you saw the results of profiling our shell_grabber.rb script. The first five lines provide the biggest opportunities for optimization, so I've repeated them here:


  %   cumulative   self              self     total
 time   seconds   seconds    calls  ms/call  ms/call  name
 42.55     0.20      0.20        1   200.00   470.00  IO#each
 21.28     0.30      0.10      690     0.14     0.20  Kernel.puts
 10.64     0.35      0.05     2070     0.02     0.02  IO#write
  8.51     0.39      0.04      690     0.06     0.07  Kernel.print
  6.38     0.42      0.03     1242     0.02     0.04  String#match

We've got two different places to try to improve our code: the use of a print and a puts in the same block (lines 4 and 5 in listing 12), which take up .69 seconds and 1380 total calls in our profiling run, and the construction of our match method, which we call 1242 times (using .42 seconds). If we can clean either of these up, it will represent a win. If we can get both, so much the better.
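The print/puts pair can be collapsed into a single puts call with string interpolation, cutting the number of method and write calls per match. Here's a rough sketch of that refactoring, using a couple of made-up passwd-style lines and an in-memory StringIO instead of the real /etc/passwd:

```ruby
require 'stringio'

# Two invented /etc/passwd-style lines for illustration.
lines = [
  "root:x:0:0:root:/root:/bin/bash\n",
  "nobody:x:65534:65534::/:/usr/sbin/nologin\n"
]

out = StringIO.new
lines.each do |line|
  if line.match('/bin/bash')
    # One puts with interpolation replaces the print/puts pair,
    # so each match triggers fewer calls into the IO layer.
    out.puts "#{line.split(':')[0]} uses bash"
  end
end

puts out.string
```

The output is unchanged; only the number of calls per matching line drops.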

Since these are two separate refactorings, we'll walk through them individually. Both follow the same pattern though:

  • Isolate the code you want to change
  • Benchmark your options
  • Make the change (if appropriate)
  • Test your new version
  • Check in your changes
In this tutorial, you actually get to cheat a little bit. Since this example only has one method, there's only a very small isolation step to worry about.

Cleaning up your match method

One way to explain the pain you're feeling here is that line.match('/bin/bash') builds a fresh Regexp from its string argument on every iteration. Ruby allows you to build a Regexp object once and then refer to it multiple times later. The benchmarking script below tests the impact of building our Regexp before the loop instead of during it:


require 'benchmark'

n = 1000
file = "fixture"

Benchmark.bmbm(25) do |x|

  x.report("build regex in block") do
    for i in 1..n do
      File.open(file).each do |line|
        if line.match('/bin/bash') then
          $stderr.print line.split(':')[0]
          $stderr.puts " uses /bin/bash"
        end
      end
    end
  end

  x.report("build regex prior") do
    for i in 1..n do
      re = /\/bin\/bash/
      File.open(file).each do |line|
        if line.match(re) then
          $stderr.print line.split(':')[0]
          $stderr.puts " uses /bin/bash"
        end
      end
    end
  end

end

Here are the results:


$ ./re_bench.rb 2> /dev/null
Rehearsal ------------------------------------------------------------
build regex in block      17.610000   0.820000  18.430000 ( 18.692166)
build regex prior          5.100000   0.340000   5.440000 (  5.465884)
-------------------------------------------------- total: 23.870000sec

                               user     system      total        real
build regex in block      17.550000   0.790000  18.340000 ( 18.421562)
build regex prior          5.170000   0.340000   5.510000 (  5.514125)

There's a pretty obvious win (over a 70% reduction in running time) from moving the Regexp construction out of the loop. This is a simple change to make:


file = "/etc/passwd"
re = /\/bin\/bash/
File.open(file).each do |line|
  if line.match(re) then
    print line.split(':')[0]
    puts " uses bash"
  end
end

Having made the change, we need to test it to verify that everything works as planned. The simplest way to test a script this short is to verify its output. In this case we can run the output of the original script and of our new version through diff to ensure that they're the same. diff orig_output new_output exits with a status of 0, meaning that there's no difference between the output of our two versions. For any real code, you'll want to run a real test suite written with Test::Unit or RSpec.
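The same check can be sketched in Ruby itself: run both versions of the matching logic over identical input and compare the captured output, a stand-in for diffing two output files. (The small fixture here stands in for /etc/passwd so the sketch is self-contained.)

```ruby
require 'tempfile'

# Original version: match against a string pattern.
def original_version(path)
  out = ""
  File.open(path).each do |line|
    if line.match('/bin/bash') then
      out << line.split(':')[0] << " uses bash\n"
    end
  end
  out
end

# New version: build the Regexp once, before the loop.
def new_version(path)
  out = ""
  re = /\/bin\/bash/
  File.open(path).each do |line|
    if line.match(re) then
      out << line.split(':')[0] << " uses bash\n"
    end
  end
  out
end

# A small fixture standing in for /etc/passwd.
fixture = Tempfile.new('passwd')
fixture.write("root:x:0:0:root:/root:/bin/bash\nsync:x:4:65534:sync:/bin:/bin/sync\n")
fixture.close

puts(original_version(fixture.path) == new_version(fixture.path) ? "outputs match" : "outputs differ")
```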

While it's not strictly necessary, we can also look at the real time reductions in our new version. Using time with both versions of the script shows 0m0.064s of real time for the original version and 0m0.046s for the new version — not quite the 70% reduction we saw in the benchmark, but a healthy change nonetheless. (Don't worry too much about not seeing the full improvement; time results can be skewed by all manner of system activity — see also my blog post Benchmarking, Lies, and Statistics.)

Once you've run your tests and timed the results again, you should check it into your source control system so that you can revert if you later find a better way to do things.

Another Approach

It turns out that there's another way to improve the performance of your regex handling: using the match method is slower than using =~. But don't take my word for it; let's benchmark it. Here's a benchmarking script:


require 'benchmark'

n = 1000
file = "fixture"


Benchmark.bmbm(25) do |x|

  x.report("build regex in block") do
    for i in 1..n do
      File.open(file).each do |line|
        if line.match('/bin/bash') then
          $stderr.print line.split(':')[0]
          $stderr.puts " uses /bin/bash"
        end
      end
    end
  end

  x.report("build regex prior") do
    for i in 1..n do
      re = %r{/bin/bash}
      File.open(file).each do |line|
        if line.match(re) then
          $stderr.print line.split(':')[0]
          $stderr.puts " uses /bin/bash"
        end
      end
    end
  end

  x.report("use =~ pattern") do
    for i in 1..n do
      File.open(file).each do |line|
        if line =~ /\/bin\/bash/ then
          $stderr.print line.split(':')[0]
          $stderr.puts " uses /bin/bash"
        end
      end
    end
  end

end

And the results are:


Rehearsal ------------------------------------------------------------
build regex in block      17.480000   0.470000  17.950000 ( 17.997961)
build regex prior          4.980000   0.330000   5.310000 (  5.322441)
use =~ pattern             4.230000   0.300000   4.530000 (  4.629864)
-------------------------------------------------- total: 27.790000sec

                               user     system      total        real
build regex in block      17.590000   0.480000  18.070000 ( 18.620850)
build regex prior          5.090000   0.310000   5.400000 (  5.498786)
use =~ pattern             4.300000   0.310000   4.610000 (  4.652404)

Our first change represents a 70% improvement over the original, but our new change is a 75% improvement according to the benchmark — a change worth making. Since you're keeping everything in some kind of source control, your first step will be to revert your last change. Then you can change the script to match this:


file = "/etc/passwd"
File.open(file).each do |line|
  if line =~ /\/bin\/bash/ then
    print line.split(':')[0]
    puts " uses bash"
  end
end

Once the changes have been made, you can go back and verify that the scripts generate the same output using diff as shown previously. When they match, you can check your new code in and move on.

Printing Once

We have two issues to consider with the way we're printing lines in our script. First, we've got two distinct printing calls. Second, we're using puts, which adds the overhead of appending a \n to every line that doesn't already end with one. Before we start making changes, let's do some benchmarking. Here's a benchmarking script:


require 'benchmark'

n = 50_000

Benchmark.bmbm(15) do |x|

  x.report("original version") do
    for i in 1..n do
      $stderr.print i, "and"
      $stderr.puts i
    end
  end

  x.report("single puts") do
    for i in 1..n do
     $stderr.puts "#{i} and #{i}"
    end
  end

  x.report("single print") do
    for i in 1..n do
     $stderr.print "#{i} and #{i}\n"
    end
  end

end

And here are the results:


$ ./print_benchmark.rb 2> /dev/null
Rehearsal ----------------------------------------------------
original version   0.370000   0.050000   0.420000 (  0.443343)
single puts        0.210000   0.020000   0.230000 (  0.240232)
single print       0.170000   0.020000   0.190000 (  0.181233)
------------------------------------------- total: 0.840000sec

                       user     system      total        real
original version   0.230000   0.030000   0.260000 (  0.272866)
single puts        0.220000   0.020000   0.240000 (  0.239319)
single print       0.170000   0.010000   0.180000 (  0.180624)
$

Both single-call versions show an improvement over the original two-call version of the code, but the print version is significantly better than the puts version (a 33% improvement versus a 12% improvement). That makes your decision easy; the new version of the script (using a single print call) is shown below:


file = "/etc/passwd"
File.open(file).each do |line|
  if line =~ /\/bin\/bash/ then
    print "#{line.split(':')[0]} uses bash\n"
  end
end

Again, run your tests so that you know you've got a good version. time lets us see just how well our optimization worked. In this case, we got 0m0.046s for the previous version and 0m0.026s for our new version — a 40% speedup.

Refactoring Roundup

In the course of this post, we've made and reverted one change, and kept two others. Our code is a bit more than 10% smaller, and runs much faster. Although we don't have unit tests, we've been able to verify our output using a functional test (and thus verify our code quality) at every step of the way, so we also know that we've not introduced any new errors.

While the script that you've been working on is small, you'll follow the same methodology on bigger programs.

First, make sure that you're spending your time wisely. Going through a lot of work to speed up a script that takes under a second to run is probably not worth your time for a command line tool, but if you're building a program that will be part of a heavily used website, even nanoseconds may count. (You'll need to make sure you've got enough iterations to be able to measure differences above the nanosecond scale though, since any variation that small is liable to be caused by outside factors.) time and profile are the tools you'll rely on here.

Second, isolate the code you want to change. In our case, the isolation was pretty simple, but when you're working with an object with multiple public and private methods it will be a bit more difficult. There aren't really any tools to help with this; it all comes down to reading the profiler output and knowing your code.

Third, benchmark your options. Once you've identified and isolated the code you're going to change, you can move it (perhaps with small changes) into a benchmarking script. Once you have benchmarking output, you can make informed decisions about how to change your code.

Fourth, make the changes indicated by profiling and benchmarking. Now that you know what works the best, you can transplant that back into your actual code.

Fifth, test your new code. Always test your code. Test your code before every commit. Unit tests make this better and easier for almost all of your code, but be ready to roll your own tests in the odd cases that it doesn't.

Sixth, check in your changes. Revision control is incredibly important; it will save you at some point.

Wednesday, December 03, 2008

Benchmarking Makes it Better

This blog post is pulled from the Profiling and Optimizing Ruby tutorial I wrote for IBM DeveloperWorks a couple of years ago. If you'd like to reuse it, please note that the initial publication at DeveloperWorks must be attributed. I've reposted it here so that people can access it without signing up for a DeveloperWorks account.

The original tutorial used the following "shell_grabber.rb" script as example code:


file = "/etc/passwd"
File.open(file).each do |line|
  if line.match('/bin/bash') then
    print line.split(':')[0]
    puts " uses bash"
  end
end

A Benchmarking Primer

Before you can start replacing code, you should have an idea about how your intended replacement performs. The benchmark library is the tool you'll want to use for this. It's simple to use: you just write a benchmarking script that includes the code you want to compare. The Benchmark module provides three methods to make this easier: bm, bmbm, and measure.

The bm method provides a basic interface to benchmarking. It takes an optional label_width argument, and each block tested can have a label specified. You define the individual blocks to be tested as shown here:


require 'benchmark'
n = 5_000_000
Benchmark.bm(15) do |x|
  x.report("for loop:")   { for i in 1..n; a = "1"; end }
  x.report("times:")      { n.times do   ; a = "1"; end }
  x.report("upto:")       { 1.upto(n) do ; a = "1"; end }
end

The bmbm method runs your benchmarking code twice, to help avoid complications from garbage collection. The first run is called the rehearsal, and is reported as part of the results. bmbm reports are defined in the same way that bm reports are; the only thing you'd need to change in the script above to use bmbm instead of bm is the call itself: Benchmark.bmbm(15) do |x|.
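As a sketch, here's the earlier iteration comparison rewritten for bmbm, with the iteration count reduced so it runs quickly. One handy detail: bmbm (like bm) also returns the non-rehearsal measurements as an array of Benchmark::Tms objects, so you can use the numbers programmatically.

```ruby
require 'benchmark'

n = 500_000  # reduced from 5,000,000 so this sketch runs quickly

reports = Benchmark.bmbm(15) do |x|
  x.report("for loop:") { for i in 1..n; a = "1"; end }
  x.report("times:")    { n.times   { a = "1" } }
  x.report("upto:")     { 1.upto(n) { a = "1" } }
end

# reports holds one Benchmark::Tms per labeled block, in order.
```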

Simple timing information can also be gathered using the measure method. (measure is used behind the scenes by the bm and bmbm methods to create their reports.) measure returns a Benchmark::Tms object that can be converted to a string for printing. A Ruby script using Benchmark#measure to time the construction of a string is shown below:


require 'benchmark'
puts Benchmark.measure { "foo" << "bar" << "baz" }
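The Tms object also exposes the individual timings as floats, which is handy when you want to work with the numbers rather than just print them:

```ruby
require 'benchmark'

tms = Benchmark.measure { 100_000.times { "foo" << "bar" << "baz" } }
puts tms.utime  # user CPU time
puts tms.stime  # system CPU time
puts tms.total  # user + system (including child process times)
puts tms.real   # wall-clock time
```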

Comparing Code With benchmark.rb

Now that you've seen the basics of benchmarking, let's put it to use. We've already used time and profile.rb to look at the performance of our shell_grabber.rb script. Just pull out the part that does the work and put it into a new script like this:


require 'benchmark'
Benchmark.bm(25) do |x|
  x.report("original code") { 
    File.open("/etc/passwd").each do |line|
      if line.match('/bin/bash') then
        $stderr.print line.split(':')[0]
        $stderr.puts " uses /bin/bash"
      end
    end
  }
end

There's one change to the original code that's worth discussing. Instead of using bare print and puts statements, they're replaced with $stderr.print and $stderr.puts so the output can be redirected away; you don't want to clutter the screen with hundreds or thousands of lines of output. This has the potential to dramatically affect the timing of your code, so if you make a change like this to one method in your benchmark, you'll want to make the same change to all of them. Take a look at just how much of a difference doing a $stdout.puts instead of a naked puts can make:


$stdout.puts  0.320000   0.040000   0.360000 (  0.396455)
naked puts    0.200000   0.020000   0.220000 (  0.262174)
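A minimal sketch of how a comparison like that could be produced. The iteration count here is a guess, and stdout is pointed at a StringIO so terminal drawing doesn't dominate the timings (where the original numbers were presumably produced with a shell redirect):

```ruby
require 'benchmark'
require 'stringio'

n = 50_000
real_stdout = $stdout
$stdout = StringIO.new  # keep the benchmarked output off the terminal

explicit = Benchmark.measure { n.times { $stdout.puts "x" } }
naked    = Benchmark.measure { n.times { puts "x" } }  # Kernel#puts writes to $stdout too

$stdout = real_stdout
printf "$stdout.puts %10.6f\n", explicit.real
printf "naked puts   %10.6f\n", naked.real
```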

Using Benchmarking Results

Using benchmarking results will help you choose which algorithm or which idiom will give you the best results. It's important that you don't throw off your results with improper tests though. For example, the benchmarking script shown above with the for, upto, and times methods only measured the performance of different forms of iteration. In this case it really doesn't matter what we do inside the iteration, as long as it's the same for each version. The output from that code modified to use Benchmark#bmbm is shown here:


Rehearsal --------------------------------------------------
for:             2.210000   0.030000   2.240000 (  2.288663)
times:           2.570000   0.060000   2.630000 (  2.623878)
upto:            2.530000   0.050000   2.580000 (  2.650115)
----------------------------------------- total: 7.450000sec

                     user     system      total        real
for:             2.080000   0.050000   2.130000 (  2.148350)
times:           2.590000   0.010000   2.600000 (  2.743801)
upto:            2.550000   0.030000   2.580000 (  2.588629)

Reading the output, you can see that there's a small but noticeable difference between the upto and times constructions, but that the real win comes with the for version — in this case, nearly a 20% improvement.

Notes

The importance of large datasets and/or multiple benchmarking runs was underscored while I was writing this section. In my initial version of the script, I only iterated through the loop 50,000 times. When I ran it the first time, the upto version of the iterator was fastest. When I moved up to 5,000,000 times through the loop and ran it a couple more times, I ended up with a consistent result of the for loop being faster.

Dropping the number of iterations down to 5,000 continued to yield the best results for the for loop version in the non-rehearsal run, but would occasionally show the upto version as faster. Dropping down to 500 iterations showed near random results. In these cases, it appears that the time required for the run was so small (0.000606 seconds in one run) that external processes had an overwhelming influence on the benchmark.

Another dirty secret of benchmarking is that you'll need to do some extra work to ensure that your results are statistically valid. Please see my blog post Benchmarking, Lies, and Statistics for more details about this.

Finally, tune in tomorrow for a follow-up post on Benchmarking and Refactoring.

Tuesday, December 02, 2008

Tuesday Ruby Tab Sweep (2 Dec 2008)

As I rode in on the bus this morning, I found that I'd popped up a number of tabs about all things Ruby. Here are the ones most worth sharing, with some short notes:

What are you reading about today?

A 'Ruby Best Practices' Blogging Contest

Update: Due to the holiday press, Gregory and I will not be able to judge this until early 2009. That means everyone has an extra two weeks or so to submit an entry. We'll close up submissions on Jan 9, and announce the awards on the 13th.

Okay, best practices are a good thing, and Gregory Brown's upcoming Ruby Best Practices looks like it's going to be a good book. The question is "Are your best practices good for Ruby?".

O'Reilly has offered up two 'tokens' for free rough cut access to Gregory's book for me to give away. Gregory and I talked about how we could best use them and decided to run a quick contest. We'd like to have you write up a blog post about a best practice that you've used with another language, or in another community, and how it would translate into Ruby. Talk about why it would help the Ruby community, or talk about why we should avoid it. Then, come back here and post a link in the comments below.

Be interesting, be controversial, but be quick. Gregory and I would like to award our two prizes before December 19th so that you can claim your rough cut before the end-of-December holiday of your choice.

Monday, December 01, 2008

Ruby Refactoring Workbook Interview

I've recently heard that Bill Wake and Kevin Rutherford are working on a Ruby version of his excellent Refactoring Workbook. Bill and Kevin were kind enough to spend a little time with me talking about their upcoming book.


First there was Design Patterns in Ruby, then a Ruby edition of Refactoring, and now your Ruby reworking of the Refactoring Workbook. What makes Ruby different enough to warrant these new editions?

Kevin A few things, I think.

Firstly, Fowler's original Refactoring book — which I hold in very high esteem — is predicated on the GoF design patterns, and on the constraints imposed by a statically typed language. Ruby's dynamic aspects — notably duck typing and open classes — mean that patterns such as Strategy, say, can be expressed differently, and the forces for applying those patterns will also subtly alter. That, in turn, means that some of the "moves" in Refactoring may need to be revised for Ruby. And in some cases, it means we need completely new refactorings.

Second, Ruby is now over ten years old, and during that time there's been little attention paid to code quality, maintainability and "habitability". Yet there must now be a growing body of "legacy" Ruby code out there. I think that's why we're seeing a rash of new code analysis tools for Ruby right now: the time is right.

Finally, Rails has lowered the bar for entry into the world of web services development, and brought Ruby programming to a more diverse audience. So there's a strong need for good books about coding practices, to support the Ruby-powered development newbie.

Bill Ruby's dynamic typing means that you have to adjust the mechanics of refactorings (and sometimes lean on the tests more). But for me, the driver was more the community than the language. It's a way for me to contribute, focusing on something like refactoring in a way that a reader familiar with Ruby can focus on the idea at hand, not constantly also having to translate from Java et al.

You mentioned that Ruby needs some completely new refactorings. Can you describe one of these?

Kevin Your best source for these is Jay Fields and Shane Harvie's Refactoring: Ruby Edition, which is a re-working of Martin Fowler's original Refactoring. One of the new refactorings they introduce is "Replace Loop With Collection Closure Method". (Their book is available as a Safari rough cut.)

I'm also intrigued by your call for good books on coding practices for Ruby-powered newbies. Have you seen Gregory Brown's Ruby Best Practices? Are there other books you'd like to see written (or translated into Ruby)?

Kevin Yes, I've seen RBP; I think the time seems to be right — the Ruby community seems ready to step up to the next level of maturity in terms of "professionalism" in software development.

As for other moves, top of my list would be a Ruby-specific version of Michael Feathers' brilliant Working Effectively with Legacy Code.

What did you learn about refactoring as you started to apply the ideas in the Workbook to Ruby?

Bill I've become much less focused on the mechanics and more focused on the philosophy of small safe steps. Writing this book gave me the incentive to go back through and try to discern the overall patterns that refactorings use, as well as the small internal steps that have varying forms.

Kevin Lots! Coming from a C/C++/Java background back in the mists of time, I hadn't really appreciated just how much the GoF patterns and the Refactoring moves depend on the forces that arise from static typing. But when that static typing is removed, some of the concepts work less well.

Here's an interesting example: Fowler introduces the Feature Envy smell, which tells you to move a code fragment into the class whose fields it accesses the most. In the C family of languages this works well, because ints and strings can't acquire new methods: they act as a brake on the process. But in Ruby, you need a lot more static analysis in order to discern whether a particular object could become the new home of your code. The smell, and the refactorings that fix it, are easier to express and apply in a static language.

Another: Modules offer the Ruby programmer the opportunity to simulate a kind of multiple inheritance, using a mix-in style. Java doesn't have that, and so there's a whole set of possibilities open to us that Martin Fowler didn't have.

And from the GoF patterns: the shape of the Strategy pattern (someone once said that everything boils down to Strategy in the end) includes an abstract class/interface representing the common "type" of the actual strategies. It isn't needed in Ruby, due to duck typing.

All this meant that we spent a large portion of our time checking whether the patterns, code smells and refactorings still applied in Ruby, and what might be different or unnecessary.

What did you learn about Ruby as you applied the ideas?

Kevin Less, I think. I've been a semi-serious Rubyist since 2004, and during that time I had done a lot of refactoring. So I was used to working in Ruby as a malleable medium, even though I'm sure there are corners of ruby-fu I've never seen. We have also deliberately avoided delving too far into the details of any particular Ruby library — this is mostly a book about technique, rather than knowledge.

My principal concern about the new workbook is whether we have noticed all of the places where GoF, Refactoring and the original workbook depend subtly on static typing or other aspects of the Java experience. Which is why we've recruited an awesome band of reviewers to help us weed out those hidden assumptions.

Bill Two things hit me. The first is that I still seem to fight too many "environment" battles, trying to get tools and dependencies just right. But the second is that once I'm in the program and using Ruby, the language feels very expressive, with just the right thing ready-to-hand.