About

James Golick

James Golick is an engineer, entrepreneur, speaker, and above all else, a grinder.

As CTO (or something?) of BitLove, he scaled FetLife.com's traffic by more than an order of magnitude (and counting).

James spends most of his time writing ruby and scala, building infrastructure, and extinguishing fires.

He speaks regularly at conferences and blogs periodically, but James values shipping code over just about anything else.


Opinionated Modular Code

Mar 28 2010

When you start writing modular code, and using techniques like dependency injection, you end up with a lot of pieces, but not necessarily an obvious whole. Working with java libraries can be an exercise in instantiating huge towers of dependencies to finally get to the object you actually need.

  val socket    = new TSocket(host, port)
  val protocol  = new TBinaryProtocol(socket)
  val client    = new TCassandra.Client(protocol)
  val cassandra = new Cassandra(keyspace, client)

My scala wrapper around cassandra's thrift bindings depends on an instance of TCassandra.Client, which depends on an instance of TProtocol (of which TBinaryProtocol is a subclass), which depends on an instance of TTransport (TSocket is a subclass).

Thrift is necessarily modular. The user of a thrift service might wish to use a different TProtocol implementation or a non-blocking socket (TNonblockingSocket). Still, though, 4 lines of setup code just to get a simple cassandra client is cumbersome.

Contrasted with tightly coupled code, this appears to be a major drawback of composability. With coupled code, if you need an object, you just instantiate it and it creates — or refers directly to — all of the collaborators it needs. It's easy to imagine a tightly coupled version of my wrapper that instantiates hard dependencies.

class Cassandra(host: String, port: Int) {
  val socket   = new TSocket(host, port)
  val protocol = new TBinaryProtocol(socket)
  val client   = new TCassandra.Client(protocol)
}

In a relatively high percentage of use cases, this set of defaults is perfectly acceptable. This kind of opinionated code is really easy to get started with, but can become problematic later on when its user decides she really does need that non-blocking socket. So, what to do?

It turns out it's possible to have it both ways. This is going to sound so simple that it's silly… but just create a second constructor that invokes the first one.

class Cassandra(client: TCassandra.Client) {
  def this(host: String, port: Int) = this(
    new TCassandra.Client(
      new TBinaryProtocol(
        new TSocket(host, port)
      )
    )
  )
}

This considerably lowers the bar for getting started with my Cassandra client. If somebody just wants to give it a try or whip up a quick and dirty program, they can do it easily — likely without even reading any documentation. But as the user's needs become more complex, I haven't shut them out of customizing the client to their heart's content.

Of course, as your object models become increasingly complex, there will be some additional effort required to maintain these auxiliary constructors. And arguably, you will create some degree of coupling between the classes. But it's mostly harmless. Provided it's still possible to supply alternate dependencies, you've accomplished modularity; the auxiliary constructor just adds sane defaults.

In ruby, I use default arguments to accomplish the same goal.

def initialize(klass, storage_factory = StorageFactory.new, table_creator = TableCreator.new)
  @klass           = klass
  @storage_factory = storage_factory
  @table_creator   = table_creator
end

This is an example from friendly's code base. The StorageProxy needs a StorageFactory and a TableCreator, so I create them if they aren't supplied.

This is all possible because the StorageFactory's and TableCreator's default dependencies are set in exactly the same way. I think the only place I ever supply alternates is when I inject test doubles in StorageProxy's specs.
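
Roughly, that looks something like this (the doubles and names here are illustrative, not friendly's actual spec code; User stands in for whatever model class is being proxied):

describe StorageProxy do
  before do
    # stand-in collaborators; no real storage or tables are touched
    @storage_factory = stub("StorageFactory")
    @table_creator   = stub("TableCreator")
    # overriding the defaults is just a matter of passing the doubles in
    @proxy = StorageProxy.new(User, @storage_factory, @table_creator)
  end
end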

This makes for an object model that is really easy to work with, yet still highly modular. It's quick to get started with, but doesn't get in your way when you need to reach in and change some shit around. Making your classes both modular and convenient gets you the best of both worlds.


Crazy, Heretical, and Awesome: The Way I Write Rails Apps

Mar 21 2010

Note: This is going to sound crazy at first, but bear with me.

The current best-practice for writing rails code dictates that your business logic belongs in your model objects. Before that, it wasn't uncommon to see business logic scattered all over controller actions and even view code. Pushing business logic into models makes apps easier to understand and test.

I used this technique rather successfully for quite some time. With plugins like resource_controller, building an app became simply a matter of implementing the views and the persistence layer. With so little in the controller, the focus was mainly on unit tests, which are easier to write and maintain than their functional counterparts. But it wasn't all roses.

As applications grew, test suites would get slow — like minutes slow. When you're depending on your persistence objects to do all of the work, your unit tests absolutely must hit the database, and hitting the database is slow. It's a given in the rails world: big app == slow tests.

But slow tests are bad. Developers are less likely to run them. And when they do, it takes forever, which often turns into checking twitter, reading reddit, or a coffee break, harming productivity.

Also, coupling all of your business logic to your persistence objects can have weird side-effects. In our application, when something is created, an after_create callback generates an entry in the logs, which are used to produce the activity feed. What if I want to create an object without logging — say, in the console? I can't. Saving and logging are married forever and for all eternity.

When we deploy new features to production, we roll them out selectively. To achieve this, both versions of the code have to co-exist in the application. At some level, there's a conditional that sends the user down one code path or the other. Since both versions of the code typically use the same tables in the database, the persistence objects have to be flexible enough to work in either situation.

If calling #save triggers version 1 of the business logic, then you're basically out of luck. The idea of creating a database record is inseparable from all the actions that come before and after it.

Here Comes the Crazy Part

The solution is actually pretty simple. A simplified explanation of the problem is that we violated the Single Responsibility Principle. So, we're going to use standard object oriented techniques to separate the concerns of our model logic.

Let's look at the first example I mentioned: logging the creation of a user. Here's the tightly coupled version:

class User < ActiveRecord::Base
  after_create :log_creation

  protected
    def log_creation
      Log.new_user(self)
    end
end

To decouple the logging from the creation of the database record, we're going to use something called a service object. A service object is typically used to coordinate two or more objects; usually, the service object doesn't have any logic of its own (simplified definition). We're also going to use Dependency Injection so that we can mock everything out and make our tests awesomely fast (seconds, not minutes). The implementation is simple:

class UserCreationService
  def initialize(user_klass = User, log_klass = Log)
    @user_klass = user_klass
    @log_klass  = log_klass
  end

  def create(params)
    @user_klass.create(params).tap do |u|
      @log_klass.new_user(u)
    end
  end
end

The specs:

describe UserCreationService do
  before do
    @user       = stub("User")
    @user_klass = stub("Class:User", :create   => @user)
    @log_klass  = stub("Class:Log",  :new_user => nil)
    @service    = UserCreationService.new(@user_klass, @log_klass)
    @params     = {:name => "Matz", :hobby => "Being Nice"}
    @service.create(@params)
  end

  it "creates the user with the supplied parameters" do
    @user_klass.should have_received(:create).with(@params)
  end

  it "logs the creation of the user" do
    @log_klass.should have_received(:new_user).with(@user)
  end
end
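
In the application itself, a controller just delegates to the service. A hypothetical action (the controller and params names are mine, not from the original app) might look like:

class UsersController < ApplicationController
  def create
    # production code relies on the default collaborators (User and Log)
    @user = UserCreationService.new.create(params[:user])
    redirect_to @user
  end
end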

Aside from being able to create a user record in the console without triggering a log item, there are a few other advantages to this approach. The specs will run at lightning speed because no work is actually being done. We know that fast specs make for happier and more productive programmers.

Also, debugging the actions that occur after save becomes much simpler with this approach. Have you ever been in a situation where a model wouldn't save because a callback was mistakenly returning false? Debugging (necessarily) opaque callback mechanisms is hard.

But then I'll have all these extra classes in my app!

Yeah, it's true. You might write a few more "class X; end"s with this approach. You might even write a few percent more lines of actual code. But you'll wind up with more maintainable code for it (not to mention faster tests, code that's easier to understand, etc).

The truth is that in a simple application, obese persistence objects might never hurt. It's when things get a little more complicated than CRUD operations that these things start to pile up and become pain points. That's why so many rails plugins seem to get you 80% of the way there, like immediately, but then wind up taking forever to get that extra 20%.

Ever wondered why it seems impossible to write a really good state machine plugin — or why file uploads always seem to hurt eventually, even with something like paperclip? It's because these things don't belong coupled to persistence. The kinds of functionality that are typically jammed into active record callbacks simply do not belong there.

Something like a file upload handler belongs in its own object (at least one!). An object that is properly encapsulated and thus isolated from the other things happening around it. A file upload handler shouldn't have to worry about how the name of the file gets stored to the database, let alone where it is in the persistence lifecycle and what that means. Are we in a transaction? Is it before or after save? Can we safely raise an error?
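
As a rough sketch (hypothetical names, not paperclip's API, and Persister is a made-up collaborator), such an object might look like:

class FileUploadService
  def initialize(persister = Persister.new)
    @persister = persister
  end

  def upload(temp_file_name)
    # the handler's only job is to hand the file to its persister;
    # nothing here knows about ActiveRecord callbacks or transactions
    @persister.save(temp_file_name)
  end
end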

In the tightly coupled version of the example above, the interactions between the User object and the Log object are implicit. They're unstated side-effects of their respective implementations. In the UserCreationService version, they are completely explicit, stated nicely for any reader of our code to see. If we wanted to log conditionally (say, if the User object is valid), a plain old if statement would communicate our intent far better than simply returning false in a callback.
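
In the service, that conditional is just ordinary code:

def create(params)
  @user_klass.create(params).tap do |u|
    # the condition is stated right where the logging happens,
    # instead of being buried in a callback's return value
    @log_klass.new_user(u) if u.valid?
  end
end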

These kinds of interactions are hard enough to get right as it is. Properly separating concerns and responsibilities is a tried, tested, and true method for simplifying software development and maintenance. I'm not just pulling this stuff out of my ass.


On Mocks and Mockist Testing

Mar 10 2010

Every so often, somebody blogs about getting bit by what they usually call "over-mocking". That is, they mocked some object, its interface changed, but the tests that were using mocks didn't fail because they were using mocks. The conclusion is: "mocks are bad".

Martin Fowler outlines two kinds of unit testers: stateist and mockist. To simplify things for a minute, a stateist tester asserts that a method returns a particular value. A mockist tester asserts that a method triggers a specific set of interactions with the object's dependencies. The "mocks are bad" crowd is arguing for a wholly stateist approach to unit testing.

On the surface, stateist testing certainly seems more convenient. A mockist is burdened with maintaining both the implementation of an object and its various test doubles. So why mocks? It seems like a lot of extra work for nothing.

Why Mocks?

A better place to start might be: what are the goals of unit testing?

For a stateist tester, unit tests serve primarily as a safety net. They catch regressions, and thus facilitate confident refactoring. If the tests are written in advance of the implementation (whether Test Driven or simply test-first), a stateist tester will derive some design benefit from their tests by virtue of designing an object's interface from the perspective of its user.

A mockist draws a thick line between unit tests and functional or integration tests. For a mockist, a unit test must only test a single unit. Test doubles replace any and all dependencies, ensuring that only an error in the object under test will cause a failure. A few design patterns facilitate this style of testing.

Dependency Injection is at the top of the list. In order to properly isolate the object under test, its dependencies must be replaced with doubles. In order to replace an object's dependencies with doubles, they must be supplied to its constructor (injected) rather than referred to explicitly in the class definition.

class VideoUploader
  def initialize(persister = Persister.new)
    @persister = persister
  end

  def create(parameters)
    @persister.save(parameters[:temp_file_name])
  end
end

When we're unit testing the above VideoUploader (ruby code, by the way), it's easy to see how we'd replace the concrete Persister implementation with a fake persister for test purposes. Rather than test that the file was actually saved to the file system (the stateist test), the mockist tester would simply assert that the persister mock was invoked correctly.
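Using stubs as spies, in the same style as the specs earlier on this page, that assertion might look roughly like this (the file name is made up):

describe VideoUploader do
  before do
    @persister = stub("Persister", :save => nil)
    @uploader  = VideoUploader.new(@persister)
    @uploader.create(:temp_file_name => "/tmp/video.mov")
  end

  it "saves the uploaded file via its persister" do
    @persister.should have_received(:save).with("/tmp/video.mov")
  end
end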

This design has the benefit of easily supporting alternate persister implementations. Instead of persisting to the filesystem, we may wish to persist videos to Amazon's S3. With this design, it's as simple as implementing an S3Persister that conforms to the persister's interface, and injecting an instance of it.
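
A hypothetical S3Persister only needs to respond to the same save method; the actual S3 client code is elided here:

class S3Persister
  def initialize(bucket)
    @bucket = bucket
  end

  def save(temp_file_name)
    # upload temp_file_name to @bucket with whichever S3 client you prefer;
    # VideoUploader neither knows nor cares how persistence happens
  end
end

uploader = VideoUploader.new(S3Persister.new("videos"))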

This is possible because the VideoUploader is decoupled from the Persister. If the Persister class were referred to explicitly in the VideoUploader, it would be far more difficult to replace it with a different implementation. For more on decoupled code, you must read Nick Kallen's excellent article that goes into far more detail on these patterns and their benefits.

To be sure, we're really talking more about Dependency Injection here than anything else, and stateist testers can and do make use of DI. But the mockist test paradigm prods us towards this sort of design.

We're forced to look at the system we're building in terms of objects' interactions and boundaries. This is because it tends to be quite painful (impossible in many languages) and verbose to unit test tightly coupled code in a mockist style.

So the primary goal of a mockist's unit tests is to guide the design of the object model. Making it difficult to couple objects tightly is one such guiding force.

Mockist tests also tend to highlight objects that violate the Single Responsibility Principle since their tests become a jungle of test double setup code. We can think of mockist testing like a kind of shock therapy that pushes you towards a certain kind of design. You can ignore it, but it'll hurt.

Failure isolation is probably the other big advantage of mockist tests. If your unit tests are correctly isolated, you can be sure exactly which object is responsible for a test failure. With stateist tests, a given unit test could fail if the unit or any of its dependencies are broken.

But is it worth it?

Mockist or Stateist?

The burden of maintaining mocks is by far the most common argument against mockist tests. You have to write both the implementation and at least one test double. When one changes, the other has to change too.

Perhaps most troubling, if an object's interface changes, its dependencies' unit tests will continue to pass because the mock objects will function as always — arguably a hindrance to refactoring. Since you need to test for that scenario, mockists also write integration tests. Integration tests are probably a good idea anyway, but as a mockist, you don't really have a choice.

Also, the refactoring problem only applies to dynamic languages. In a statically typed language, the program will simply fail to compile.

I find this burden troubling. More code to write makes the “we don't have time” argument come out in pressure situations. For a design exercise, the cost of mockist tests seems quite high.

On my last open source project (friendly), I decided to give mockist testing a try. Most of the code turned out beautifully. And the mistakes I did make could have been avoided had I listened to the pain I felt while testing those parts of the code.

Since that project worked out well, I've been applying mockist techniques to other work. I've written mockist tests in everything from my scala projects to my rails apps. So far, so good.

In theory, I hate the idea of mockist tests. They just seem like too much work. I don't want to like them and remain reluctant to admit that I do. But in practice, I'm writing better code, and it's hard to hate that.