After the announcement of Diamondback Ruby on ruby-talk a little while ago, I decided to contact the developers to learn more about what they're doing. I ended up having quite a conversation with two of the team members, Mike Hicks and Mike Furr, which I'm posting as the interview below.
What do you hope to learn from this project?
Mike Hicks: There is a long-discussed tension between statically (or explicitly) typed languages like Java and dynamically (or implicitly) typed languages like Ruby. My hope has been to discover how to include the best aspects of static and dynamic typing in a single language. We all really like Ruby's design and features, and felt it was the right language to start with. As we go, we'll look to derive general design principles that make sense for almost any programming language, to simplify and improve the process of programming generally.
Static types are useful for catching bugs early and serve as documentation (indeed, RDoc includes a pseudo-type for Ruby library methods). But static type systems may reject perfectly correct programs due to their imprecision, and thus can "get in your way," particularly when doing rapid prototyping. On the other hand, dynamic types suffer no imprecision, but they delay discovery of some bugs until run time. It can be particularly annoying to mistype the name of a method in a call deep within your complicated program, and have the program fail after running for a long while with "method not found," when a type checker would have immediately revealed the mistake. The challenge is to incorporate the best bits of both approaches, e.g., to not reject programs prematurely (as a static type system could) while finding as many certain errors in advance as possible.
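To make that failure mode concrete, here is a small Ruby sketch (our example, not one from the interview) of a misspelled call that plain Ruby only reports at run time:

    class Report
      def initialize(entries)
        @entries = entries
      end

      def summary
        @entries.map { |e| e.to_s }.join(", ")
      end

      def archive
        # Typo: the method is `summary`, not `sumary`. The interpreter only
        # notices when `archive` finally runs, possibly deep into a long
        # execution; static inference flags the missing method up front.
        File.open("report.txt", "w") { |f| f.puts(sumary) }
      end
    end

    Report.new([1, 2, 3]).archive
    # => NameError: undefined local variable or method `sumary'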
Mike Furr: We had two main hypotheses going into this research. The first hypothesis was that development in dynamic languages doesn't scale. This is a hard thing to measure, but there is a fair amount of anecdotal evidence that seems to support it. For example, just recently a few of the Twitter developers were interviewed about moving their infrastructure from Ruby to Scala, and static typing was mentioned as one of the advantages of Scala over Ruby.
The second hypothesis was that most people "think in types" even if they don't write them down while programming. We therefore imagined that most code written by Ruby programmers would be accepted by a sufficiently advanced static type system, and we hoped to design a type system for Ruby powerful enough to handle the kind of Ruby code people actually write, but not so complex that it would be impossible to use.
What made you choose Ruby as your test implementation instead of Python, Perl, or one of the other widely used dynamically typed languages?
Mike Furr: When we were first throwing around ideas about analyzing a scripting language, Ruby seemed to be the language with the most momentum, likely because of the release of Ruby on Rails a few years earlier. Ruby is also a rather young language and its syntax and semantics are continuing to evolve. Ideally, we hope that our research could influence future directions for Ruby, although much of our research would be applicable to other languages as well.
Mike Hicks: I really like Ruby's design, particularly the principles of "everything is an object" and "duck typing." We also liked that Ruby was the core of Ruby on Rails, whose popularity was increasing. In our exploration of Perl, an early contender, we became frustrated with its huge number of overlapping language features and its quite surprising behavior in many instances. We didn't see how to write a useful static analysis for Perl programs without a lot of difficulty. We considered Python only cursorily, and I don't recall any particular downsides that came up.
Why did you choose OCaml as your implementation language?
Mike Hicks: Our group at Maryland uses OCaml almost exclusively for writing static analysis tools, particularly for analyzing C code, so it was natural to want to use it for this project, too. OCaml, in my view, is the perfect language for writing a compiler or analysis: its syntax is terse (as compared to Java, say), and features such as first-class functions and pattern matching very much simplify the process of writing tools that analyze or manipulate structured data, like abstract syntax trees. We followed the lead of the CIL (C Intermediate Language) project, also written in OCaml, in designing RIL (Ruby Intermediate Language), the core of DRuby. For example, both CIL and RIL syntax trees are simplified after parsing to make analysis more manageable.
Mike Furr: OCaml is my favorite language to program in and I have been using it throughout my time in graduate school. However, I also think it is the right tool for the job. The quintessential example for functional programming languages is writing a compiler, and DRuby is essentially a compiler front-end. OCaml's type system is also a real asset in developing complex code that manipulates abstract syntax trees.
A lot of folks seem to think that you've written a Ruby implementation in OCaml instead of a type analyzer for the existing Ruby 1.8. Do you think an OCaml implementation of the language would be a good thing? Why or why not?
Mike Hicks: This is a hard question to answer. Why might one want to implement an interpreter in one language vs. another? I can imagine several reasons: performance, portability, maintainability, and reliability, among others. Developers often implement interpreters or VMs in C/C++ for reasons of performance and portability. But C and C++ encourage programming "on the edge of safety," so mistakes can lead to crashes, security vulnerabilities, etc., hurting reliability and maintainability. By contrast, coding in a high-level language, e.g., Java or OCaml, avoids many reliability problems, thanks to type safety and garbage collection, but at the cost of some performance. (For grain-of-salt interlanguage performance metrics, check out the Computer Language Benchmarks Game, http://shootout.alioth.debian.org.) And OCaml is really well-suited to writing compilers and interpreters, thanks to its rich structured datatypes and pattern matching. In my experience, an OCaml-based compiler or interpreter is much more succinct than one written in Java. So I think it's a good option.
Mike Furr: There are already several implementations of Ruby available, so adding another one simply because it was implemented in OCaml doesn't seem like a good idea to me. Maintaining an implementation of Ruby is a lot of work, since the language continues to evolve its syntax and semantics from version to version, and the API of Ruby's standard library is also tied to particular Ruby versions. Instead, there would need to be some fundamentally new feature that an OCaml implementation would provide and that developers would find useful. For example, it might be interesting to explore compiling Ruby programs to native code using the OCaml bindings of LLVM and doing type-driven optimizations based on Diamondback Ruby's type system. This would be a lot of work and I have no idea if the resulting code would be any faster than some of the newer Ruby virtual machines, but it could be a fun project.
How does static type inference affect the balance between testing and debugging? How does it affect the testing and debugging processes?
Mike Hicks: Type inference is a debugging aid, I suppose. It is meant to help identify bugs that could come up, and to do so without requiring you to run your program. It is not a replacement for testing, though. Essentially it finds out whether you are programming with a certain level of consistency; if in one place you declare your method to take three arguments but elsewhere call the method with four, that's an inconsistency. But it doesn't prove that your code does "the right thing," e.g., whether you formatted your output string correctly. You need tests for that. Our approach allows one to help the other: when you write tests, DRuby will profile their execution to provide information that helps type inference, and type inference helps you identify some bugs without having to run tests.
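A tiny sketch of the consistency check Hicks describes (hypothetical code, not DRuby output):

    def format_name(title, first, last)
      "#{title} #{first} #{last}"
    end

    # Elsewhere, deep in the program, a call site disagrees on arity:
    format_name("Ms.", "Grace", "Hopper", "Rear Admiral")
    # => ArgumentError: wrong number of arguments (4 for 3)
    # Ruby raises this only when the line executes; type inference reports
    # the mismatch without running the program at all.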
Mike Furr: Static typing is a tool just like testing frameworks and debuggers. All of them are meant to improve the quality of the software being developed, and each has its own advantages and disadvantages. The major advantage of static analysis is that it is able to reason about every path through your program simultaneously (and before it is run). Static types also provide terse, verified documentation. If your method has a type annotation that says it returns a Fixnum, that annotation will never become stale and can be trusted by any other developer who is calling your method.
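As a concrete illustration, DRuby reads signatures from specially marked comments. The sketch below is our own example, and the `##%` annotation syntax is recalled from the project's documentation, so treat the details as approximate:

    class Counter
      # DRuby-style annotation: `count` takes no arguments, returns a Fixnum.
      ##% count : () -> Fixnum
      def count
        @count ||= 0
      end
    end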
However, static typing isn't perfect and is not meant to replace other QA techniques such as testing. One of the goals for DRuby is to allow programmers to incrementally add static types to their code bases, letting them benefit from extra checking where they want it, without requiring changes to the entire code base.
You mentioned that you've found several potential errors in Ruby libraries and programs as a result of type inference analysis. What kinds of problems are you finding? How could the Ruby community take advantage of these kinds of discoveries?
Mike Furr: The Ruby community has embraced test-driven development as a standard practice, so we didn't expect to find a large number of errors. However, getting 100% test coverage is often difficult and, not surprisingly, many of the bugs we found were in error-handling code that was not exercised by any test cases. Some of these bugs were extremely simple, like misspelling a variable name or referencing a method that did not exist.
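A sketch of that pattern (hypothetical code): the rescue branch is never exercised by the tests, so the misspelled name inside it survives until production:

    def load_config(path)
      File.read(path)
    rescue Errno::ENOENT => err
      # Typo: the variable is `err`, not `error`. This NameError surfaces
      # only when the file is actually missing, a path tests may never take.
      warn("could not read #{path}: #{error.message}")
      nil
    end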
One bug that I found particularly interesting was where a program called a method in the File class that didn't exist. This code was covered by the test suite and didn't cause a test failure. The reason was that the testing code monkey patched the File class to add the method before running the test suite. Thus, you would only encounter the problem if you executed the code outside of the test suite.
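The shape of that bug, reconstructed as a hedged sketch (the method name here is hypothetical, not the one DRuby actually found):

    # spec_helper.rb: the suite patches File before any test runs.
    class File
      def self.older_than?(path, seconds)
        Time.now - mtime(path) > seconds
      end
    end

    # lib/cache.rb: application code that relies on the patched method.
    def stale?(path)
      File.older_than?(path, 3600)
    end

    # Under the test suite, stale? works; outside it, the same call raises
    # NoMethodError, because File.older_than? was never part of the library.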
We hope that DRuby will develop into a tool that developers can run on their projects as part of their own quality assurance process. In the meantime, we have been filing bug reports for the errors we discovered so that the authors can fix them. For example, we found two errors in the RubyGems package manager that have already been fixed in its latest release.
What kind of feedback are you getting from Rubyists?
Mike Furr: We've gotten some very encouraging feedback so far. In fact, despite the legendary flame wars between static and dynamic typing, I haven't received any negative comments about the idea of bringing static types to Ruby. A lot of people are using Ruby these days, so a tool that can help improve their development process by finding bugs or improving documentation is clearly appealing. Diamondback Ruby still needs some polishing before programmers can begin using it on their own projects, and this is something we are going to continue to work on. Eventually, we'd like to perform some user studies to measure the effectiveness of the various features of Diamondback Ruby, so its usability is certainly important to us.
How well do Ruby programs perform under Diamondback Ruby?
Mike Furr: Diamondback Ruby uses a combination of static and dynamic checks to ensure that Ruby programs are well typed. Programs that can be checked purely statically (which we hope will be most of them) have no overhead at all, since they can be safely run by a traditional Ruby interpreter unchanged. However, if a program does require a runtime check, then individual objects or methods may be instrumented to ensure they don't violate their types. When dynamically checking objects, we instrument the eigenclass of the individual object so that only method calls to that object must be checked (not every object of the same class). Thus the checks are pay-as-you-go: the more objects that require dynamic checks, the higher the overhead. This makes it hard to quantify the overhead with a single measurement, as it can vary from execution to execution of an application. I have run some micro-benchmarks and observed a 15% slowdown in one case, but this data point should be taken with a grain of salt, as it was merely to convince myself that the instrumentation was working and not egregiously slow. An application that rarely uses a dynamically checked object may see almost no overhead, but if the application calls methods on that object in a tight loop, the overhead could be significantly higher.
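To show what per-object instrumentation via the eigenclass looks like, here is a minimal sketch in plain Ruby (our illustration of the idea, not DRuby's actual machinery); it wraps one method on one object, so other instances of the same class pay nothing:

    def check_returns_string(obj, meth)
      original = obj.method(meth)          # bound to this object only
      eigenclass = class << obj; self; end # open just obj's eigenclass
      eigenclass.send(:define_method, meth) do |*args, &blk|
        result = original.call(*args, &blk)
        unless result.is_a?(String)
          raise TypeError, "#{meth} returned a #{result.class}, expected String"
        end
        result
      end
    end

    checked   = "hello"
    unchecked = "world"
    check_returns_string(checked, :upcase)
    checked.upcase    # goes through the runtime type check
    unchecked.upcase  # untouched: no instrumentation, no overhead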
Are you using the RubySpec framework to drive your implementation?
Mike Furr: DRuby includes a dynamic analysis that allows us to reason more precisely about features such as eval(). This analysis requires us to parse the original Ruby program into our intermediate language, add any instrumentation code, and then write the transformed program out to a separate location on disk to be executed by the Ruby interpreter. This whole process was rather tricky to get right, and we used the RubySpec test suite to ensure that our transformations were correct. It was definitely a great help to have such a comprehensive test suite.
We haven't used the RubySpec tests to drive any type analysis for the standard library just yet, but I can definitely see using it in the future as we continue our research.
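The profiling idea described above can be caricatured with a simple eval wrapper (ours, and much cruder than DRuby's source-to-source transformation) that records what eval actually receives during a test run:

    module Kernel
      alias_method :__unlogged_eval, :eval
      def eval(src, *rest)
        ($eval_log ||= []) << src   # record each dynamically evaluated string
        # Note: a faithful wrapper must forward the caller's binding in
        # `rest`; DRuby instead rewrites the program text itself.
        __unlogged_eval(src, *rest)
      end
    end

    eval("1 + 1")
    p $eval_log   # => ["1 + 1"]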
I'd really like to see OCaml get more play, but I keep seeing books like this and wonder when a good OCaml book for non-Math/CS types is going to hit the shelves. What will it take to get OCaml in front of more developers?
Mike Hicks: My observation is that languages take off when library or framework support for some important set of applications hits critical mass. Then developers interested in that application intuit that it's easiest to build that app in a certain language or framework, and then go off and learn what they need to learn. Then those developers start building more libraries and the language is used for other things. I think we can see this trend in Perl (first killer app: text processing), Java (first killer app: applets), Ruby (first killer app: Rails), etc. We're starting to see more adoption of Erlang, thanks to the rise in multi-core and high-availability commercial systems, and we're seeing a growth in Haskell, at least in part because of all of the code you can get for it (though I can't speculate on what its killer app is).
When I first started doing work in static analysis, C/C++ were the languages of choice, oftentimes building on gcc or other existing tools. But then George Necula and his students wrote CIL (C Intermediate Language). Nowadays many, many tools are written using CIL as the front end and intermediate language, by people who never were "functional programming people." CIL was just so much better, clearly, than anything else, that people flocked to it. (As of today there are 297 citations to the CIL paper according to Google Scholar, which is especially noteworthy for an "infrastructure" piece of work.)
Of course, C analysis and other "symbolic computations" on programming languages are a niche area, and not likely to bring in the masses. OCaml still needs that breakthrough use-case of great interest that will push it over the top. It remains to be seen what that will be. But once it's found, the books, tools, etc. will all follow.
Mike Furr: I agree with everything that Mike (Hicks) said, but would also add that OCaml needs to overcome its branding as an "academic language." I have found that a lot of programmers think of functional programming as a fringe concept, perhaps invoking bad memories of struggling with it as a CS major. At the same time, one of the features people really seem to love about Ruby is blocks, which are of course a functional programming technique. I think that Ruby's syntax plays an important role here: programmers don't have to understand what a higher-order function is to be able to use a block, and yet they can immediately see the usefulness of the technique. However, a functional programmer might find this syntax restrictive (why only one block per method?). Perhaps the road to OCaml's adoption will be through Ruby, which gives a gentler introduction to some of the same ideas used in ML.
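A small example of the point about blocks (ours): the block is a higher-order argument the caller never names, while passing more than one function requires explicit Proc objects:

    # One block per method: the common, lightweight case.
    def twice
      yield
      yield
    end
    twice { puts "hi" }

    # Several functions at once: Ruby makes you reach for explicit lambdas,
    # which is where a functional programmer feels the one-block limit.
    def dispatch(n, on_even, on_odd)
      n % 2 == 0 ? on_even.call(n) : on_odd.call(n)
    end
    dispatch(7, lambda { |n| n / 2 }, lambda { |n| 3 * n + 1 })  # => 22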
Comments:
Take a look at Typed Scheme and its means for gradually moving modules from the Untyped to the Typed world.
We started with inference in 1988 (Soft Scheme) and explored it for 22 years before I gave up and moved on to explicitly and statically typed languages as a "target". Inference is way too brittle.
-- Matthias
Matthias,
thanks for the pointer to Typed Scheme. Do you think inference is useful for those who want some measure of static typing in a dynamically typed language?
Quick response to Matthias:
Typed Scheme, and indeed the referenced work on soft typing, has been a great inspiration and source of ideas for our work. So, thanks!
I agree that inference is no good on its own, both because it's brittle and because it's too slow to scale. DRuby also relies on type annotations for the most expressive types, and uses contracts/gradual-typing-style run-time enforcement if we cannot ensure that the declared type of a function matches its implementation.
In 1988 I was inspired by Cartwright (Rice) to start a program on applying type inference to untyped "prototyping languages." Back then that was the term used for what we now call "scripting languages." PLT Scheme in particular shares many characteristics with scripting languages and is used as one.
At my suggestion, Cartwright and Fagan tried ML type inference with a different type algebra. I then worked with three more PhD students on other variations of type inference for dynamically typed languages. (I am probably the only one in academic PL who has insisted on dynamic typing for that long.) My students and I turned two such academic approaches into 'products' that I then used in courses. What it all boiled down to was that when the inferencer worked, we and our students had no problem. Most of the time it didn't work, though: students either got error messages (like in OCaml and Haskell) that nobody understood, or the inference produced a type that was so large that nobody knew whether it was right. Small syntactic changes in one part of the program could affect types for expressions somewhere else.
(The experiences of others around this time were similar: Aiken, Henglein, Pottier, and the Erlang people, who all tried soft typing, as it has become known, for dynamically typed languages.)
So around 1999/2000 I put a first student on a project that integrated an explicitly and statically typed language with the same syntax, gluing the two together at module boundaries, all the while guaranteeing the type soundness of the approach. Over the last three years, Sam Tobin-Hochstadt has turned this idea into a product: Typed Scheme.
I now see the role for type inference as an environmental tool for recovering the types that the scripter had in mind when they wrote the program in the first place. But the typing itself must become explicit. In a way, I have taken 20 years to confirm the (philosophical) Cardelli-Harper-Mitchell writings of the early 1990s.
I'll talk about all this in an invited talk at the Scripts to Programs workshop, which should be of interest to everyone in the scripting world.
Mike, I am aware that you guys cite our papers and credit us as inspiration. (Thanks) Just thought the Ruby community might benefit from a reference to the larger picture. -- Matthias
Cobra, a Python-influenced .NET language, has static and dynamic types, like Objective-C.
See http://cobra-language.com/ for more information. And Cobra is elegant, more elegant than Python!
-- Csaba