He works for MoneyDesktop, a Provo, Utah-based startup, where he helped them transition away from a less-than-desirable PHP solution to Rails. They now enjoy an entirely new service architecture driven by Ruby (and Protobuf). When not working with Ruby, he runs OneSimpleGoal and plays around with iOS and Objective-C.
To get another take on Protocol Buffers, I asked BJ to join me for a quick interview. Enjoy!
How did you get started using Protocol Buffers?
BJ: At the beginning of 2010 I was hired by a startup in Provo to help build out their product offering. The entire application was written in Java, but for the piece I was to be in charge of I was given free rein to choose a platform. Of course I chose Ruby, but it soon became apparent that we needed a solid way to get data from one application to the other.
This need launched a refactor toward a more service-oriented approach. We researched different solutions for data interchange, such as Thrift and the like, but we ended up choosing Protobuf for its simplicity, pedigree, and multi-platform support. No XML, no WSDL, just simple definitions compiled to the language of your choice. Defining a data structure and API with one declarative language, and then being able to build the client and server implementations in two different languages, was a huge win. We created a socket-based RPC server on the Java side and called the endpoints from Ruby. It was very simple.
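To give a flavor of what that looks like (this example is invented for illustration; it isn't one of our actual definitions), a minimal .proto file pairs a data type with a service endpoint:

    // user.proto -- an illustrative (not actual) definition
    package example;

    // A data type: the numbers are wire-format field tags.
    message User {
      required string email = 1;
      optional string name  = 2;
    }

    message UserRequest {
      required string email = 1;
    }

    // A service endpoint: compile this once, implement the server
    // in Java, and call it from Ruby (or vice versa).
    service UserService {
      rpc Find (UserRequest) returns (User);
    }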
I'm now with a new company and the new team was very receptive to the idea of a Protobuf service ecosystem for our service-oriented application. It is currently the primary method of internal data interchange between multiple service applications. At the time of writing, we have over 20 different proto definition files, 63 separate data types (including enums), and 15 independent service classes implementing a total of 32 service endpoints.
What do you see as the strengths of the Protocol Buffers data format?
BJ: One of the greatest strengths of Protobuf is its clear data definitions. Open up any .proto file and it's not hard to deduce the structure of the represented data types. Defining service endpoints is similarly simple, meaning all of the ambiguity of wiki-based (or similar) API documentation is immediately eliminated. Clarity is key when building a large system with a team of any size. Being able to clearly understand how and what data is transferred within the system is absolutely essential, especially when you hire beyond your core development team and need to get people contributing quickly.
I've already mentioned the power we gained from being able to tie together a service architecture with multiple languages in a unified API. The Protobuf project officially supports Java, C++, and Python implementations of the definitions compiler and data serialization code, but there is a ton of third-party code listed for many other languages, like Objective-C and JavaScript (with support in Node.js as well).
Which Protocol Buffers implementation are you using? How did you end up choosing it?
BJ: The only Ruby project listed on Protobuf's "Third Party" page (at the time) was Mack's Ruby-Protobuf. This was a great start as the compiler was built in YACC. However, once I started integrating the API into our Ruby application, it became clear that the RPC side had been half-baked and just sort of thrown out into the wild. Files were compiled and stubbed in the wrong places, meaning that if I added any code to the stubbed client or server files, subsequent compiles would overwrite my changes. Not good.
By that time we were full-steam ahead on the Protobuf implementation in the other services, so I basically had to go in and rewrite the compiler's code generation for each of the services, as well as completely rewrite the RPC backend to become compatible with the Protobuf SocketRPC library written for Java. Since that first rewrite in early 2010, I've done another rewrite (late 2010) to use EventMachine as the RPC backend, and I can tell you it's light-years faster. The DSL is much sexier too, looking much more like an AJAX request with callbacks than a standard socket connection with byte-reading hell. You can get that code on my GitHub fork on the compatibility-0.4.0 branch.
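To give a flavor of the DSL (a rough sketch; the exact class and method names in the gem may differ), a client call with callbacks reads like this:

    # A sketch of the callback-style client DSL, reusing the
    # hypothetical UserService from the earlier .proto example.
    request = Example::UserRequest.new(:email => 'bj@example.com')

    Example::UserService.client.find(request) do |c|
      c.on_success do |response|
        puts "Found user: #{response.name}"
      end
      c.on_failure do |error|
        puts "RPC failed: #{error.message}"
      end
    end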
What are your plans for your fork of Mack's ruby-protobuf? Will it get wrapped into his distribution, or will you go all the way, rename it, and start publishing it as a gem?
BJ: Fantastic question. Currently I've packaged the gem internally for our SOA ecosystem to get around the problem of getting it into a full release with the original code. I've descended into merge-hell attempting to get my code to work with theirs several times now, and each time it just feels like it's not worth it. I've yet to have contact with the original developers (I'm fairly sure they live in Japan), so I'm not entirely sure they'd accept any patches I'd send anyway.
I've also toyed with the idea that since I've changed a significant chunk of the original code I could just make it my own gem with some witty name (and a reference to the original). The only things keeping me from that path are that a) I'd prefer not to insult the original developers, and b) I'm a bit ashamed that there aren't very many tests backing up the RPC backend (the major piece that I wrote from scratch).
Each day we have thousands of successful RPC calls, with a virtually non-existent error rate, running through the EventMachine RPC code written into this gem, so it has certainly been battle-tested in a heavily used production system. Unfortunately it just doesn't have that warm fuzzy feeling (for those who haven't used it yet) that you get when you have 200 green tests behind each class. However, patches with tests are certainly welcome :).
Anyone can pull from my fork on the compatibility-0.4.0 branch (essentially the "master" I build the gem from) and build their own gem if they wish. The current version in my fork is 0.4.0.8. I'd be happy to answer any questions that may arise, and I may even be available to consult with anyone on how to implement Protobuf in their current system.
You gave a presentation on Protocol Buffers at uv.rb. How was it received? Do you see more people starting to use this data format?
BJ: To be honest, I'm not sure my presentation went the way I'd hoped, certainly not well enough to highlight many of the benefits of and reasons for using Protobuf. I spent too much time showing the "How" instead of the "Why". I think many people left the meeting intrigued, but it was also marred by a drawn-out rant by a few of the developers present, debating whether or not it was more prudent to use REST/JSON than a more declarative format like Protobuf.
The argument is moot simply because both styles are great; they just fulfill slightly different needs. When it comes to "Code as Documentation," it's hard to argue against Protobuf, a format that is much easier for devs from other languages to buy into. I've never had a developer come to work on a Protobuf API who, after being shown the .proto files, could not understand how to read or extend the definitions.
I hope that developers will give the format a try because I think it's the next level up from normal web application design. It's the start of understanding that for larger applications, different tools should be considered to help alleviate the pains of a (potentially) larger system and the needs of moving data from one place to another on the fly.
Ok, that's a pretty intriguing statement. What different tools should we be looking at (or developing) to work on larger systems and larger data sets?
BJ: Hopefully I don't get myself into too much hot water with the answer to this question (or go off on a large tangent), but here we go. Keep in mind that this long-winded answer should be taken with a grain of salt, because every system will be designed to meet different goals. Therefore, there is no "one true way," as some would tout.
That being said, if you are looking to build a system for growth, there are certain concepts and technologies that should at least be considered from the outset. Service-Oriented Architecture (SOA) is a way of designing a system for growth; to me it's the most natural way to begin with the journey in mind. For those new to SOA, a short primer: it involves creating smaller, independent applications that are easier to write and maintain because they focus on smaller feature sets, but that, when roped together, give you the benefit of all the systems working as a whole and ready to scale.
In this type of system we never want to share data between service applications directly, such as connecting from Service A to Service B's database to get user data. We share data by creating APIs for each service application (with protobuf of course :)), then publish those APIs for our other services to consume. If one application needs user data, it doesn't connect to the user database, it connects to the internal User service's API to gather the data. Naturally protobuf fits extremely well here, but REST/JSON or SOAP or (insert other transport protocol here) can obviously be used also.
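In code, the contrast looks something like this (names are illustrative, reusing the hypothetical User service from earlier):

    # Anti-pattern: Service A reads Service B's database directly.
    #   user = ServiceB::Database[:users].first(:email => email)

    # SOA approach: consume the User service's published API instead.
    request = Example::UserRequest.new(:email => email)
    Example::UserService.client.find(request) do |c|
      c.on_success { |user| puts user.name }
      c.on_failure { |error| warn error.message }
    end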
Other "large systems" or so-called "enterprise" technologies that fit well into an SOA system are background jobs (queues) and various types of messaging systems.
Queueing is essential for the speed and scalability of a system, as it offloads non-urgent (yet important) processing to separate threads or processes. A simple example of how a queue can increase the speed and usability of a system is sending an email when a user is created. The user generally doesn't care (or know) that you are sending them an email when their account is created, but they do care if it's taking 10 seconds. So rather than tie up the user's process just to send an email, you would queue that "job" for later (even if it's processed milliseconds later) and let the process return the result of the user creation. Workers in other threads or processes will pick up the email job and send the email for you.
The main queueing system we use is GitHub's excellent Resque, coupled with my own little resque-remote plugin. Resque-remote gives us the ability to queue a job for another service to consume.
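The email example translates directly into a Resque job; a minimal sketch (assuming a Rails-style mailer) looks like this:

    # A Resque job: workers watching the :email queue pick it up
    # out-of-band. Resque serializes the arguments, so pass the id.
    class SendWelcomeEmail
      @queue = :email

      def self.perform(user_id)
        user = User.find(user_id)
        UserMailer.welcome(user).deliver # assumes a Rails-style mailer
      end
    end

    # In the user-creation path: enqueue and return immediately.
    Resque.enqueue(SendWelcomeEmail, user.id)

Resque-remote layers on top of this so a job can be enqueued by name for another service's workers to consume; check the plugin's README for its exact interface.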
Messaging is such an enormous topic that I'm not sure I'm the one you want describing its ins and outs. The short of it is that in certain contexts we've found it can make more sense to use push-based data transfer rather than pull-based. Take the user creation example: when a user is created in my User Service application, the User Service doesn't know about any other systems that may be interested that a user was created, and frankly it shouldn't care. The User Service should only be responsible for posting a message (to a message service or bus) that an event occurred in the system, in this case that a user was created. Once the event is messaged, user creation can go about its merry way. Other parts of the system may be listening to the message (event) bus for user creation events and their associated data, and they will receive the data as a push. This specific messaging paradigm is usually referred to as PubSub (Publish/Subscribe). As I've already mentioned, there are many, many more messaging patterns that can be followed.
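A minimal sketch of the pattern (using Redis pub/sub as a stand-in, since the specific bus matters less than the shape):

    require 'redis'

    # Publisher (the User service): announce the event, then move on.
    Redis.new.publish('user.created', user.id.to_s)

    # Subscriber (any interested service, run in its own process
    # since subscribe blocks): receive the event as a push.
    Redis.new.subscribe('user.created') do |on|
      on.message do |channel, user_id|
        puts "Heard that user #{user_id} was created"
      end
    end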
These are just a few of the systems we've put in place to manage data transfer complexity in our SOA ecosystem. There's also a whole other branch for data warehousing, such as ETL data transfer systems like Pentaho or Jasper. The possibilities are... well, you get the idea.
The coolest part about all of this is that you can use Ruby for 100% of these so-called enterprise situations. We do. You don't have to use Java or .NET to solve "Big Boy" problems. When I first started with Ruby, I wasn't entirely sure of this, but I certainly am now.
So, you've read along this far. What do you think? How are you using Protocol Buffers? Why did you choose to go down this route?
3 comments:
Great interview, Pat. I'm happy to see BJ's good work getting some deserved attention.
As the developer alluded to at uv.rb, I'd like to take the opportunity to apologize for derailing the discussion at the end of the meeting. We had a healthy discussion, but I do not think it was a "rant". We were focusing on more of the "whys" after seeing a really great presentation on the "how". It really was a fantastic presentation, and I for one am really glad that it was as technical and detailed as it was. (You really should have been there, Pat.)
Having spent some time using protobuf in Ruby, my thoughts continue to be that if you can control the infrastructure from front to back, and you don't have to integrate with C++ or Java systems, then I don't understand why declaring all the data types and messages up front in .proto files and sharing them across all your systems makes sense. It's a valid approach for integrating with systems that use static type checks at compile time, and if that is your situation then it's a great solution. But if all your systems are already in Ruby, I'm not sure I buy the argument that protobuf is any more declarative than any other SOA approach.
There's also Beefcake for protobuf in Ruby, but it looks like it only does serializing and deserializing.
Why does this depend on EventMachine? Isn't this essentially a data format?