Bringing Computer Savvy & Web 2.0 to Pre-Teen Girls

Mar 30, 2009

I'm a huge fan of screencasting. It's a great way to learn about software development. Screencasting is also starting to be used to teach general daily computing topics. Nevertheless, it remains something of a niche. The general public still doesn't really know about screencasting.

There certainly isn't a whole lot of screencast content aimed at kids, except maybe a programming tutorial or two. There's nothing (I can find) aimed at young girls.

Hailey Hacks aims to fill that void in the market. Hailey is a savvy pre-teen girl. She's fun — and a little bit rebellious. You might think of her as a cross between Bart and Lisa Simpson.

In the first episode, titled April Fools, Hailey teaches you how to play pranks, low tech and high. First, she suggests that you fill your father's shampoo bottle with ketchup. Later in the episode, she recommends a way to make your father's phone ring incessantly while you're at school. I'll let Hailey show you the rest.

So far, we've got one other episode up there, in which Hailey teaches you to make your own lolcats. Lots more are coming. Hear about them first by following @HaileyHacks or subscribing to Hailey's blog.

Even though this is web 2.0, we're selling the videos for $2 each over at the Hailey Hacks site. We'd love to hear what you and your kids think of them!

Firefox-style cmd-1 through cmd-9 shortcuts for switching tabs in Safari!

Feb 26, 2009

Safari is by far the best browser on OS X. It's fast, has an awesome UI, and just feels better integrated into the Mac experience.

As a former die-hard firefox user, though, I couldn't live without cmd-1 through cmd-9 for switching tabs. Thanks to Safari Commander, now, I don't have to!

For $5, one person can use Safari Commander on as many of *their* computers as they like. Two members of the same family with separate computers should purchase two copies.

WARNING: REQUIRES SAFARI 4 BETA. Available for download here. So far, I've found the beta very stable, and well worth it for this extension alone.

Buy it at the giraffesoft store!

FLOSS week

Feb 16, 2009

We at giraffesoft went over a year without even having a website. Then, we lasted a few months with a website, but no blog. That ends today, hopefully with something of a bang.

We're big believers in extracting functionality sooner rather than later. This practice has resulted in a ton of code that we use in all of our projects — nice and DRY.

The only problem is, we never get around to releasing this stuff. Starting a blog seemed like a good excuse to spend some time polishing up some code and, you know, writing READMEs.

So, starting today, we'll be releasing an open source project every day this week. They'll probably be mostly rails plugins. But, you never know. Something else might float in.

If you're interested in following along, subscribe to giraffesoft, the blog. Or, if, like me, you don't really use your feed reader anymore, follow @giraffesoft on twitter or github! See you later with the first project.

One Line Tests Without the Smells

Jan 16, 2009

Everybody loves one line tests. That's one of the great things about shoulda. It allows you to test a lot of simple functionality using one liners, called "macros". Unfortunately, though, there are a lot of things about shoulda macros that make them less than ideal as a testing mechanism.

First, they get called at class scope, so the variables from your setup blocks aren't available. A quick read through the archives of the shoulda mailing list proves that this confuses people. The workaround for this issue is a major code smell: string programming.

For example...

should_redirect_to 'post_url(@post)'

Here, 'post_url(@post)' is evaluated at the scope of the controller. A parse error in your string wouldn't point you to your should_redirect_to declaration. It would point you to a line inside of shoulda, where the instance_eval actually happens. Not good. There are lots of other reasons string programming sucks, too, but I won't get into them here.

The second issue with macros is that failures often happen deep inside of shoulda. Backtraces often become completely useless, forcing you to open up shoulda and wade through it to figure out why your code is failing.

One More Problem

context "A blog post" do
  setup do
    @author = create_person
    @post   = create_post :author => @author
  end

  should "be editable by its author" do
    assert @post.editable_by?(@author)
  end
end

Don't see the problem?

Why bother writing self-documenting test code if you always have to explain it to the reader? Test names are essentially glorified comments and comments are frequently code smells. Furthermore, all the extra code required to create a test (should "" do ... end) almost certainly discourages one assertion per test. If the assertion is one line and the code can explain itself, why bother with all the other crap?

The Solution: Zebra

context "With a blog post" do
  setup do
    @author        = create_person
    @somebody_else = create_person
    @post          = create_post :author => @author
  end

  expect { be_editable_by(@author) }
  expect { @post.not_to be_editable_by(@somebody_else) }
end

But, what about the test name?

I'm glad you asked. This is where zebra gets really cool. The above code will create tests with the following names:

"test: With a blog post expect"
"test: With a blog post expect @post.not_to(be_editable_by(@somebody_else))"

Now, that is self-documenting code.

The right tool for the job

The cool thing about zebra is that it's an extension to context and matchy (shoulda support coming very soon). If you have a test that belongs in an it or should block, with a big, old-fashioned test name, you can have it. Just use should or it. When you have a short, self-documenting test, use expect. Best of both worlds.

Get It!

`sudo gem install giraffesoft-zebra`

Speaking at CUSEC

Jan 11, 2009

If you live in the area and you're not going to CUSEC, you're crazy. The speaker lineup is absolutely amazing, including such names as...

  • Richard Stallman (yes, the famous RMS)
  • Dan Ingalls
  • Avi Bryant (if you've never heard Avi speak, you don't know what you're missing)
  • Giles Bowkett

Sounds expensive, right? Wrong.

All three days of the conference cost only $150.

Also, I'll be speaking Friday morning. Here's the abstract...

Storming the Java Bastille: a political introduction to dynamic languages

Working with rigid, static languages is like living in a communist dictatorship. Your freedom is extremely limited, there’s a lot of bureaucracy, and it’s often painful.

Working with dynamic languages, by contrast, is like living under libertarian rule. The government is small and doesn’t like to make decisions on your behalf. They provide you a lot of freedom and leave it up to the community to decide how best to do things. But, with great power comes great responsibility. Dynamic languages are like a chainsaw: powerful, but watch out for your extremities.

In this code-heavy session, we’ll take a look at the pros and cons of both styles, hopefully without losing any limbs.

If last year's CUSEC was any indication, this is going to be a great conference. Hope to see you there!

François Beausoleil and Daniel Haran join GiraffeSoft and our new site launches!

Nov 22, 2008

Let's start with the biggest news. This has been a long time in the making. At RubyFringe, François, Daniel, and I had a long conversation about how great it would be if we could work together.

We gave it a shot for Rails Rumble, in which we built what does this error mean. We won for most useful, which we took as a sign that our idea of working together was a good one.

I can't express how excited I am and privileged I feel to be working with François and Daniel. They are truly top notch in every way. Their contributions to open source, previous projects, and resumes speak for themselves.

If you don't know these amazing guys, you can read about François and Daniel on our new site!

The New Site

Thanks to our good friends over at Arktyp and the awesome Webby framework, we finally have a real website. With plenty of work coming in over the last 14 months or so, I was having a really hard time motivating myself to put a full-fledged site together.

Well, now that the team is growing, I finally got my shit together to finish up the site. I'm really excited to be able to show off some of our past projects and open source work. We've got a full range of services and products, and we're finishing up an app or two that you'll hear more about soon. So, check it out. It's awesome.

This is just the beginning. There'll be some more giraffesoft announcements soon, so stay tuned.

MoR => open_source_hackfest?

Nov 04, 2008

Over the last year or so, I’ve really enjoyed Montreal on Rails. There have been lots of awesome presentations, and general good times had by all.

When Carl asked for feedback on Uservoice, a lot of people seemed interested in more presentations geared towards newbies. While presentations can be really interesting and useful, most of the best programmers I know learned their craft through open source work. There’s no better way to improve your skills (not to mention your profile in the industry).

So, I’ve been thinking about organizing a monthly, open source hackfest.

It would be an informal evening where people could come and work on FLOSS. If you have a project, it would be a great place to get help from some of your local ruby gurus, or just an excuse to work on your project. If not, it’d be a great place to pick up a project and learn from other developers.

At the end of every evening, we could have a few lightning talks (5mins), to give people the opportunity to show off what they’ve been working on. Or not, if nobody wants to speak. Only rule: no preparation allowed. No slides. Nothing.

Think of it sort of like Zed Shaw’s Freehacker’s Union, minus the hazing ritual for new members.

So, what do you think? If we turned MoR into an open source hackfest for a couple of months, would you miss the presentations? Is this something you're interested in trying out? Will you be offended if we open it up to all things Ruby, instead of just rails?

Please direct your comments to the post on the MoR blog.

Off-Topic: Café Myriade

Oct 27, 2008

I'm a coffee geek. About half my kitchen is dedicated to espresso gear, and various tools for brewing filtered coffee, including a vacuum pot, and the newest addition, a cafe solo. Anyway, being a coffee geek, I also hang out with coffee geeks. And, it just so happens that the guy who taught me everything I know about coffee, competitive barista Anthony Benda, just opened his very own cafe.

If you like coffee, you owe it to yourself to check this place out. There's a huge difference between the crap you get at Starbucks (which we lovingly refer to as 'dirty water'), or even your local coffee shop, and what they're brewing over at Myriade.

The rest of this article is cross-posted from Daniel Haran's blog.

Daniel and I went to visit Cafe Myriade (praized page) for their Grand Opening. He grabbed this shot after a friendly latte art challenge:

Latte Art at Cafe Myriade

Myriade is worth visiting if you care about either coffee or chocolate. Baristas Anthony Benda and Scott Rao are both obsessive about coffee: Anthony is a competitive barista and Scott is the author of The Professional Barista’s Handbook.

Seeing them at work refining techniques and recipes is amazing and inspiring; they had received a new type of coffee maker akin to a french press and were testing different quantities and steep times.

For chocolate fanatics this is the only retail spot I know of in Montreal that carries SOMA chocolate, including their hot chocolate. SOMA is one of only 2 chocolatiers in Canada that make their own chocolate; I religiously visit them every time I’m in Toronto.

One of Anthony’s signature drinks is a coffee with melted chocolate. For his last competition he tried over a dozen different chocolates to find the perfect fit. Rather than being satisfied with his choice, it only seems to have ignited another obsession.

Cafe Myriade is located at 1432 Mackay, close to Maisonneuve at Guy-Concordia.

Our Rumble App: What Does this Error Mean?

Oct 20, 2008

François, Daniel, and I (and Mat, in spirit) spent the weekend rumbling. It was a great time coding with these two superstars, but you don't care about that.

Our app is called what does this error mean?. We all see error messages, and until now, the best way to find solutions to them was a Google search. The problem with Google searches, though, is that the results are ordered by the quality of the site, not by the quality of the solution. What does this error mean solves that problem, and a few more.

But, really, reading sucks. So, watch our screencast to learn all about wdtem.

Rails & Merb Integration

As part of our rumble project, we built plugins for both rails and merb that override their default development mode error messages. With our plugin installed, you'll see our logo below the error message. Simply click on the logo to automatically jump to a what does this error mean search!

For rails:

$ script/plugin install git://

For merb:

sudo gem install what_does_this_error_mean-merb

Please Vote for Us!

If you think our app is cool, please consider voting for us once rails rumble voting starts! We don't know the url yet, but have been told to point people to our team page.

Watch François Beausoleil Train for Rails Rumble

Oct 15, 2008

My Rails Rumble team is all set to win, because we trained the hardest. Check out the video François made of some of his hard work preparing for the competition.

RailsRumble 2008 Training from François Beausoleil on Vimeo.

Blank: A Starter App for r_c and shoulda Users

Oct 10, 2008

Bort was released recently. Peter Cooper speculated that Bort "could well catch on as the de facto bare bones / generic Rails application". But, what about us non-RSpec users? There are dozens of us, I tell you. Dozens!

We build a lot of apps at GiraffeSoft — we love to experiment with whatever ideas excite us on any given day. We're all sick of editing restful_auth code, and moving tests over to Shoulda and the one assertion per test pattern. Bort doesn't suit our needs. So, blank was born.

Right now, it's pretty simple. It has authentication, and forgot password. That's about it. But, it's no biggie. Since blank creates your new app as a git repo that shares history with blank's repo, you can pull in changes we make at any time. So, when we finally get around to implementing openid support, you'll get it for free, if you start with blank.


All of our standard tools (and rails) are vendored:

  • active_presenter
  • andand
  • attribute_fu
  • hoptoad
  • mocha
  • rake
  • restful_authentication
  • ruby-openid
  • will_paginate


Installing blank is as easy as running a rake task. Except that blank uses thor instead, because it’s the new hotness, and it supports remote tasks.

Just install thor:

$ sudo gem install thor

…then install blank’s thor tasks:

$ thor install

…then you’re ready to create a new app with blank:

$ thor blank:new_app the_name_of_my_app

That’s it! The thorfile will display a couple of rake notes where you should replace blank app with your app's name. Also, you'll want to fill in your hoptoad API key in config/initializers/hoptoad.rb.

If we improve the thor file, all you have to do is run:

$ thor update blank

before creating your next app, and you’ll get the changes automagically.


All development will be done at the github repo. Fork away :)


Blank was created by me, with contributions from Daniel Haran.

resource_controller 0.5

Sep 03, 2008

I've been really bad about releasing new versions of r_c. The last version that I actually released officially was 0.2. Well, enough has happened that it's time to get official about things.

What's New

  • Singleton controllers are now supported, thanks to Bartek Grinkiewicz (grin). Just inherit from ResourceController::Singleton, and you'll get a set of default action options that makes sense for a singleton controller. As always, there's more info in the RDoc.
  • It's possible to properly inherit from r_c controllers now. However, you have to use the declaration syntax (calling resource_controller in your controller's class definition, rather than inheriting from RC::Base) to make it all work right.
  • The flash helper now accepts a block, which means that you can interpolate values into your flash messages (e.g. create.flash { "#{@post.title} successfully created!" }).
  • Thanks to Jonathan Barket (his github is here), the scaffold_resource now generates RSpec specs, if RSpec is installed in your app.
  • Lots of bugs and little issues were fixed.
  • All r_c development is officially moved to github. Follow the repo here, if you're not already.
  • From now on, r_c will only be supported as a gem dependency. Gem plugins are the new hotness; love them.


Lots of awesome people contributed to this release.


As I mentioned before, Spree, the "Open Source Commerce Platform for Ruby on Rails", has moved to r_c. That also means that the Rails Envy guys' awesome new screencast site Envy Casts is powered by r_c. Since spree is open source, it makes a great example app for r_c. If you're looking for examples, the Spree source is a great place to start.

Get It

Add it as a gem dependency:

config.gem 'resource_controller'

Install it as a gem manually:

$ sudo gem install resource_controller

Or grab the source:

$ git clone git://

As always, the RDoc is available at

Rails commerce platform Spree moves to resource_controller

Aug 12, 2008

Spree wanted to become RESTful, so they chose resource_controller to help them keep DRY. At least some of those changes now seem to be merged into master.

Wearing Out My Delete Key

Aug 02, 2008

One of the great things about pair programming is getting to see how somebody else approaches a problem. One can learn a lot by observing another programmer's thought process. For example, the first day of working with Mat, I learned that some programmers use something called a 'debugger' to troubleshoot problems in their code.

Jokes aside, I think every programmer should take every pairing opportunity they get. This article is about a realization I had while pair programming with Daniel Haran on the way to RubyFringe.

Evidently, not everybody is quite as hard on their delete key as I am.

Always Delete

At one point during the train ride, Daniel looked at me, and said, "Wow — we're deleting more code than we're keeping!" To which I responded, "Yeah, of course we are. Don't you always?"

This illustrates one of my strategies for writing great code. Trying to get it right the first time is futile. So, I stopped trying.

I'd much rather get it out of my brain and into my editor, no matter how crude; see my tests go green, and refactor it immediately. Mostly, that means deleting the entire implementation and starting from scratch. Let me tell you why.

Speculating Sucks

Trying to perfect an implementation in one's mind is a form of speculation. It is extremely difficult to judge the readability of something you cannot read, or the performance of something you cannot run. Such a judgement may be possible in some cases, but it will never be a replacement for actually seeing, reading, or running the code.

A good portion of the time, the solution that seems crude in my mind turns out to be perfectly adequate once I see it in my editor. Sometimes, it's even fairly elegant. When my first try sucks, refactoring leads me to the right solution far more quickly than speculating ever could. This is because identifying problems in my initial implementation is so much easier when I can actually see, and manipulate it.

A great programmer certainly has a better sense for which solutions are going to be more readable. That programmer will usually get closer on the first try. But, in my experience, the very best programmers are the fastest at trying out several different solutions and choosing between them, or quickly iterating on one, until it's perfect.

Exponential Productivity

The elegant implementation you see when you read a great programmer's code is often the third or fourth try. The great programmer is often more effective because they can implement several solutions in the same amount of time it takes the average programmer to implement one.

All the while, they are improving their base of experience by having real interactions with many real solutions to the same problem. That allows them to arrive at a better solution to the problem next time. That, I believe, is one of the reasons programmer productivity is exponential.


Since I started working with highly productive languages like Python and Ruby, I have come to believe that code should be disposable. The easier it is to refactor, or even rewrite (a form of refactoring), the better. That's one of the reasons I am a proponent of dense languages. Anything that can't be easily refactored due to its size is a major liability.

It seems pretty clear to me that programming is best learned through doing. That's why so many philosophy graduates, and jazz musicians are better programmers than unmotivated kids with software engineering degrees. Writing code is a lot more valuable than thinking about code.

The faster you can write, and evaluate code, the faster you can get better, which accelerates your ability to write and evaluate code. That's exponential growth.

Tweet From the Command Line

Jul 31, 2008

I'm a command line junkie. I crave the command-line like addicts crave heroin. Okay, maybe a little bit less.

In an effort to spend more time on the command-line, and less time in my web browser, I created a little utility that allows me to tweet from the prompt. I call it tweet.

It's going to make me more productive. So, now I won't have to feel bad about twitter's victory. It will make you more productive too.

You use it like this:

$ tweet "I'm going to walk my dog."

You download it like this:

$ sudo gem install giraffesoft-tweet

You fork it from here.

See the README for the config file syntax.

That is all.

Introducing ActivePresenter: The presenter library you already know.

Jul 27, 2008

Presenters were a hot topic in the rails community last year. A lot of prominent bloggers wrote about using them, and the implementations they had come up with. Oddly, though, when I needed one a couple of weeks ago, I was unable to find a suitable implementation. Lots of articles — no code.

Let's answer the question on everybody's mind before we move on. Feel free to skip ahead if you already know the answer.

WTF is a presenter?!

In its simplest form, a presenter is an object that wraps up several other objects to display, and manipulate them on the front end. For example, if you have a form that needs to manipulate several models, you'd probably want to wrap them in a presenter.

Indeed, attribute_fu solves this problem for some cases. However, when you're dealing with unrelated models, or, really, any situation other than a parent saving its children, you're probably better off using a presenter.

Presenters take the multi-model saving code out of your controller and put it into a nice object. Because presenter logic is encapsulated, it's reusable and easy to test.

Want one?


Daniel and I wrote most of ActivePresenter on the train ride over to RubyFringe. It is an ultra-simple presenter base class that is designed to wrap ActiveRecord models (and, potentially, others that act like them).

ActivePresenter works just like an ActiveRecord model, except that it works with multiple models at the same time. Let me show you.

Imagine we've got a signup form that needs to create a new User, and a new Account. We'd create a presenter that looks like this.

class SignupPresenter < ActivePresenter::Base
  presents :user, :account
end

Then, we'd write a new action like this one:

def new
  @signup_presenter =
end

And a form:

<%= error_messages_for :signup_presenter %>

<%- form_for @signup_presenter, :url => signup_url do |f| -%>
  <%= f.label :account_subdomain, "Subdomain" %>
  <%= f.text_field :account_subdomain %>
  <%= f.label :user_login, "Login" %>
  <%= f.text_field :user_login %>
<%- end -%>

A create action:

def create
  @signup_presenter =[:signup_presenter])

  if
    redirect_to dashboard_url
  else
    render :action => "new"
  end
end

Lastly, an update action:

def update
  @signup_presenter = => current_user, :account => current_account)

  if @signup_presenter.update_attributes(params[:signup_presenter])
    redirect_to dashboard_url
  else
    render :action => "edit"
  end
end

Seem familiar?

If you're using r_c, most of this comes for free with:

class SignupsController < ResourceController::Base
  model_name :signup_presenter
end

For more on complex forms in rails, and the presenter pattern, see my upcoming PeepCode screencast!


I have been sticking my presenters in app/presenters. If you want to do the same, you'll need to add a line like this to your environment.rb:

config.load_paths += %W( #{RAILS_ROOT}/app/presenters )

Get It!

As a gem:

$ sudo gem install active_presenter

As a rails gem dependency:

config.gem 'active_presenter'

Or get the source from github:

$ git clone git://

(or fork it at

Also, check out the RDoc.

Okay Twitter, You Win.

Jul 17, 2008

For the longest time, I've been telling myself that I'm going to stop using twitter. It's a waste of time, and I was just trying it out anyways.

Well, something keeps drawing me back to it. I keep tweeting. So, I've given in.

I actually spent some time styling my twitter page to look giraffesoftish. I even put the badge on my blog (to your right, underneath github).

Let this blog post serve as the (reluctant) official notice that I am on twitter. Follow me here.

Business Logic Bleeding in to Views and Controllers

Jul 16, 2008

I've been doing a fair bit of training recently — both while bringing Mat up to speed on the latest and greatest best practices, and going and speaking to clients' teams. A few concepts keep coming up, so I'm going to try to blog about them as they do. Here's the first one.

I see this all over client projects, and admittedly, some of my older ones:

<%- if @post.creator == current_user -%>
  <%= link_to "Edit", edit_post_path(@post) %>
<%- end -%>

Seems innocuous — the post can only be edited by its creator. But, that is a business logic rule, so it belongs in your model (and your tests).

context "Posts" do
  setup do
    @creator      = create_user
    @post         = @creator.posts.create!(hash_for_post)
    @another_user = create_user
  end

  should "be editable by their creator" do
    assert @post.can_be_edited_by?(@creator)
  end

  should "not be editable by another random user" do
    assert !@post.can_be_edited_by?(@another_user)
  end
end

class Post < ActiveRecord::Base
  def can_be_edited_by?(user)
    user == creator
  end
end

Why? A few reasons. First, because your model classes should represent a complete picture of your data's structure, and business logic rules. Second, a rule like that deserves testing, even if it's as simple as the one in this example. Finally, because later on, you might want admins to be able to edit posts, too.

context "Posts" do
  setup do
    @creator      = create_user
    @post         = @creator.posts.create!(hash_for_post)
    @another_user = create_user
    @admin        = create_user :admin => true
  end

  should "be editable by their creator" do
    assert @post.can_be_edited_by?(@creator)
  end

  should "be editable by an admin" do
    assert @post.can_be_edited_by?(@admin)
  end

  should "not be editable by another random user" do
    assert !@post.can_be_edited_by?(@another_user)
  end
end

class Post < ActiveRecord::Base
  def can_be_edited_by?(user)
    user == creator || user.admin?
  end
end

If we're using the can_be_edited_by? method all over our controllers and views, a change to the access policy shouldn't entail anything other than editing a couple of lines of code in our models and unit tests.
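With the rule in the model, the view snippet from the top of this post (a sketch, reusing the same helpers as the original example) reduces to asking the model:

```erb
<%- if @post.can_be_edited_by?(current_user) -%>
  <%= link_to "Edit", edit_post_path(@post) %>
<%- end -%>
```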


Here's another one I see all the time (especially in my code). This one tends to be harder to spot.

class MessagesController
  before_filter :login_required

  def project
    @project ||= current_user.projects.find(params[:project_id])
  end

  def messages
    @messages ||= project.messages
  end
end

The idea here is that we're using the association as an implicit access control mechanism. If the user row is associated with the project row, the user has access to that project. I know that there are several r_c users who have used this pattern, since it's so easy to implement with r_c. I've even recommended it. Ouch.

The problem with this pattern is lack of encapsulation. There are a good handful of situations in which you might want the access rule to change such that you'd have to go and change every call to current_user.projects. Which is ugly.

What if you decided to make ProjectMembership a join model, for example? Rather than actually deleting membership rows, you decide you'd like, for record keeping purposes, to have a revoked_at field which denotes a former membership made invalid. You might think — no problem, I can just change the conditions of the association.

class User
  has_many :projects, :through => :project_memberships, :conditions => "project_memberships.revoked_at IS NULL"

Aside from ambiguous naming, this remains an incomplete solution. At some point, you're going to decide that admins have access to all projects. You could add that condition to your SQL, too, but that approach is problematic for the same reason we're talking about, here: the definition of an admin may change. So, we need a different strategy.

class Project
  has_many :project_memberships
  has_many :users, :through => :project_memberships

  named_scope :with_active_membership_for, lambda { |user|
    { :include    => :project_memberships,
      :conditions => ['project_memberships.user_id = ? AND project_memberships.revoked_at IS NULL',] }
  }

  def self.for(user)
    user.admin? ? self : self.with_active_membership_for(user)
  end
end

This isn't a perfect solution, since the definition of a revoked membership is living in the SQL; ideally that would be defined in ProjectMembership. But, with a join model, I'm not sure of any other way. So, at least in this instance, we can use the Project.for method in our controllers, and views, and not have to worry about a change in the project access rules causing a need for major refactoring later on.

class MessagesController
  before_filter :login_required

  def project
    @project ||= Project.for(current_user).find(params[:project_id])
  end

  def messages
    @messages ||= project.messages
  end
end

But, Not Always

One could look at these examples and decide to engage in reductio ad absurdum. Why not encapsulate the messages collection, they'd ask?

The rule I tend to stick to is that if I know that there's any kind of access control policy being enforced on an object or collection, it gets encapsulated, and tested in the model.

In the fictional examples above, the assumption was that, provided access to the project, a user would be allowed to access all of the messages within. In that case, I wouldn't have encapsulated, because there would have been no rule to encapsulate.

As soon as there is a policy, though, it belongs in tested methods in your models.

The controller's responsibility in all of this is to control access to a resource in the sense that it actually performs the check, and redirects the user to a login screen, or shows an access denied message if they are not entitled to perform said action on said resource. The controller is not, however, responsible for deciding who it is that gets access; that's the model's job.

Like a bouncer at the door of a nightclub, your controller shouldn't make the rules, it should only enforce them. Luckily for you, cute 17 year old girls won't have the same effect on your controller that they might on a bouncer.

Mat Martin Joins GiraffeSoft!

Jul 06, 2008

Mat has been a staple at Montreal on Rails since day one. Hanging out with him was always fun; we had a ton of interesting conversations. Whether we were discussing testing frameworks, or debating the latest raganwald post, chatting with Mat was always interesting, and fun. We hung out a lot at CUSEC, which is when Mat told me about his C# job — he wasn't... ummm.... loving it. That's when I started devising my plan to hire him, but alas, it wasn't time. Fast forward a couple of months, and finally, it is time!

But, enough about me, here are some fun facts about Mat.

  • Mat is a language geek. His presentations at MoR have been about JRuby, and rubinius. He can often be found reading Steve Yegge, and Raganwald. Don't be surprised if Mat leaves GiraffeSoft at some point in the future to escape from "ruby hell" for the purity of lisp.
  • Mat *really* loves testing. So much so that at a former job where testing wasn't part of the culture, Mat wrote his code test first, and then trashed the tests when he checked in his code, since he wasn't allowed to check them in. He'll fit right in here :).

What Mat'll be Working On

As far as client work is concerned, Mat will be working on a couple of stealth mode projects that we'll hopefully be able to blog about in a couple of months. Client work, however, will only account for around 50% of Mat's time. So, what about the other half?

I'm going to try something totally crazy: 50% time. Yeah, that's right, fifty percent time, 5-0 — eat your heart out, google. Mat will have half his time to work on whatever he wants. He may use his time for blogging, working on Giraffetyp projects, open source, or developing some of his own ideas. Investing time and effort in some of my ideas, and open source projects has been a huge win for my skill set, and profile, and I'm taking a serious bet that it'll do the same for Mat.

So, Look Out

Mat has already proven himself to be a great blogger, and launched his first open source project. Nearly every article he's written over the last few months has been featured on programming reddit, hacker news, etc. His first open source project even got shout outs from the github blog, and the rails envy podcast. Armed with ample time every week to work on that stuff, I'm betting Mat will become a prominent figure in the community in no time.

Here's what Mat has to say about his move.

r_c, meet RSpec

Jun 20, 2008

Jonathan Barket just sent me a patch that gets resource_controller's scaffold_resource generator to spit out RSpec specs! I quickly merged it in to master, and wanted to announce it to the world, since it's something I've been hoping to get added to the plugin for a while.

r_c's scaffold_resource generator now supports vanilla test/unit, Shoulda, and RSpec for tests, and erb, and haml for templating. It will sense which of those plugins or gems you have installed in your app, and generate accordingly — it's absolutely 0 configuration.

So, if you're an RSpec user, grab the latest master from github, and get generating! :)

Gems I've Known and Loved #1: Expectations

Jun 08, 2008

Recently, I realized that something was wrong with all of the conventional testing and specing libraries.

One of the things I loved about RSpec, when I read thin's specs, for the first time, was how easy it was to read the assertion code:

some_var.should == 5

But, when I tried using RSpec on a project, it continually frustrated me that I had to describe each test twice (by naming it, and writing pseudo-english code):

it "should augment the count by one" do
  some_var += 1
  some_var.should == 5
end

So, I went back to the familiar Shoulda. But, then, a couple of weeks ago, I came to a realization: Shoulda has exactly the same problem — it's just hidden under the awesome set of macros. Hell, even test/unit has this problem. Test names are comments. And, frankly, many, if not most of the tests that I write (like the example above) just don't need commenting — they're simple enough that a comment is unnecessary verbosity. A comment may even be distracting the reader from actual test code. Then, a couple of days later, Jay Fields blogged about exactly that. That's what made me take a look at his testing framework: expectations.

Naming Tests with Code

When I was working with RSpec, I started to wish that I could do something like this, to avoid the naming penalty:

before do
  some_var += 1
end

some_var.should == 5
some_var.should be_an_instance_of(Fixnum)

Shoulda can do something like that, but it doesn't work in all situations, because it gets called at the class scope, so you have to go guessing about what the name of the instance variable will be. Besides, since we're sticking to one assertion per test, couldn't the assertion be the name of the test?

some_var.should == 5 do
  some_var += 1
end
Oops, we just derived the syntax for expectations.
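That derived syntax is easy to prototype, too. Here's a toy assert-first expect in plain Ruby (a sketch of the idea, not the expectations gem itself):

```ruby
# Assert-first: the expected value leads, and the block both performs
# the setup and returns the actual value.
def expect(expected)
  actual = yield
  raise "expected #{expected.inspect}, got #{actual.inspect}" unless expected == actual
  true
end

# The assertion doubles as the test's name; no English description needed.
expect 5 do
  some_var  = 4
  some_var += 1
end
```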

Assertions are King

Except that expectations takes it a few steps further. It provides a ton of niceties for describing your tests in sensible language, and great mocking facilities. Our last example would look like this:

expect 5 do
  some_var  = 4
  some_var += 1
end

If we wanted to test for Fixnum, like in our second assertion, above, there's a nice shorthand for that.

expect Fixnum do
  some_var  = 4
  some_var += 1
end

As I write this, I can hear somebody out there screaming: "That's not very DRY!!". Even though I prefer the duplication in this particular example for readability reasons, let's DRY it up, just for fun.

setup = lambda do
  some_var  = 4
  some_var += 1
end

expect(Fixnum, &setup)
expect(5, &setup)

Looks a lot like what I was wishing for with RSpec, doesn't it?

Next, let's look at a mocking example. One of the things that makes mocking a little bit weird with conventional testing and specing frameworks is that assertions come last, but mocks come first. So, when you're trying to follow the one-assertion-per-test pattern, you end up with two different flows: setup, then assert, for plain assertions; and mock, then setup, for mocks. Since assertions always come first with expectations, there's only one possible flow, making tests more readable.

expect mock.to.receive(:deliver) do |my_mock|
  my_mock.deliver
end

Finally, spec junkies can even write expectations using BDD-style language.

expect do |process|
  process.finished = true
end


Expectations is pretty new, so I haven't yet seen any niceties for testing rails. It would be great to be able to do something like:

expect controller.to_render('index') do |c|
  get :index
end

As a big fan of Shoulda, I'd love to see some of the same types of macros for expectations, too (not in the core framework, but as an add-on).

Get It

$ sudo gem install expectations

Check out the RDoc for more examples, and a full set of documentation.

Custom Protocols Considered Harmful: Thin & Rack are the Platform of the Future

Jun 01, 2008

While I was figuring out how to embed a webserver in to my new open source project, I had an epiphany: REST is the real deal, and thin, rack, and invisible are the platform of the future. It turns out that most daemon processes require a lot of code just to implement a protocol, daemonize, and manage things like PID files. From what I can tell, most of the code in many of these utilities is there to do things that aren't their primary responsibility.

Take Starling, Twitter's queueing agent, for example. A quick sloccount of starling's lib directory shows that there are 737 lines of code in that gem. Further investigation shows that the service (that is, everything but the client) is comprised of 5 files.

  1. server.rb is responsible for things like starting the server, processing requests at the socket level, and managing Starling's thread pool. server.rb contains 146 LoC.
  2. runner.rb is responsible for parsing command line options, daemonizing, switching to a specified user and/or group, forking, etc. runner.rb contains 199 LoC.
  3. handler.rb implements the memcache protocol. It contains 143 LoC.
  4. persistent_queue.rb actually implements — well — the persistent queue. It accepts new queue items, and stores them on disk for later retrieval. persistent_queue.rb contains 103 LoC.
  5. queue_collection.rb is a proxy to a collection of PersistentQueue instances. I took that last sentence from their documentation, because I'm actually not 100% clear on what this file does. At any rate, queue_collection.rb contains 79 LoC.
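Summing those file sizes makes the split explicit:

```ruby
# Lines of Starling's lib/ spent on plumbing (daemonizing, option
# parsing, protocol handling) rather than on queueing itself.
plumbing = 146 + 199 + 143            # server.rb + runner.rb + handler.rb
total    = 737                        # sloccount of the whole lib directory
share    = (100.0 * plumbing / total).round
```

That's 488 of 737 lines, about 66%, spent on things other than the queue.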

As we can see, most (66%) of Starling is completely unrelated to queueing. Sure, implementing the memcache protocol was a good idea, since it's well supported, but at what cost? The author of Starling was forced to write quite a bit of new code.

All new code is immature. There are always bugs to fix, and issues that arise. But, this kind of new code has got a much bigger problem: most of it is unrelated to Starling's domain, and it isn't generic enough to be useful for anybody but them. So, Starling's daemonizing code, and memcache implementation are only being looked after by Starling's developers.

While Starling is fairly popular, it's a pretty specific tool. So, the base of contributors, bug finders, and bug fixers is far smaller than for, say, a web server.

Thin & Rack as a Platform

A web server needs to do all of the same ugly stuff as Starling. It needs to know how to daemonize, and speak a protocol (REST!). The difference? That stuff is a web server's primary responsibility.

Thin gives you event-driven listening, daemonizing, and stability — all for the price of a few lines:

Thin::Server.start('0.0.0.0', 5432) do
  map('/') { run MyApp.new }  # MyApp here stands in for any Rack app
end

Thin's mailing list has over 250 subscribers, and nearly 100 people watch it on github. Plenty of people are running their websites on thin (and, those are just the ones we know about). And, while most of the work is still being done by MA, there have been plenty of contributions from other people. And, given that a web server is a much more widely needed application, thin has a much larger potential user base than Starling.

Piggybacking on thin's technology is a good idea, and it's incredibly easy to do. Thin has built-in support for rack, a generic layer for connecting a web server to an application. It does all the stuff that every web framework used to have to do. Rack makes writing a framework insanely easy.
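The interface Rack standardizes really is tiny: an application is any object that responds to call(env) and returns a status, a headers hash, and a body that yields strings. A complete app needs no framework at all:

```ruby
# A complete Rack application: a lambda responding to call(env) and
# returning the [status, headers, body] triple.
hello = lambda do |env|
  [200, { 'Content-Type' => 'text/plain' }, ["hello from #{env['PATH_INFO']}"]]
end

# A server like thin would build env from the socket; here we fake one.
status, headers, body = hello.call('REQUEST_METHOD' => 'GET', 'PATH_INFO' => '/hi')
```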

Invisible: A Web Framework in 40 sloc

The last time I saw MA speak, he coded a version of invisible live, during an hour long talk. Earlier today, I forked it, and made some changes to tailor it toward writing web services that are consumable by ActiveResource. It is an MVC framework, but my fork has no support for templating, since most responses will be of the to_xml variety.

The controllers look a lot like rails controllers. Like in merb, the return value of an action becomes the response body. However, things like changing the status code are decidedly more bare bones than in most frameworks. Also, there is no user-specified routing at all: all routes map to /controller/:id, with the action being dependent on the REST method.

These two hashes determine which action gets called (i.e. exactly how ActiveResource expects them):

MEMBER_ACTIONS     = { 'PUT' => :update, 'DELETE' => :destroy, 'GET' => :show }
COLLECTION_ACTIONS = { 'POST' => :create, 'GET' => :index }

If you follow invisible's conventions (and, it's hard not to), your service should be consumable by active_resource, out of the box. That means you get a free client!
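With the routes that fixed, dispatch is almost no code. Here's a hypothetical dispatcher in the same spirit (a sketch, not invisible's actual source):

```ruby
MEMBER_ACTIONS     = { 'PUT' => :update, 'DELETE' => :destroy, 'GET' => :show }
COLLECTION_ACTIONS = { 'POST' => :create, 'GET' => :index }

# /:controller hits collection actions; /:controller/:id hits member actions.
def route(http_method, path)
  _, controller, id = path.split('/')
  table = id ? MEMBER_ACTIONS : COLLECTION_ACTIONS
  [controller, table[http_method], id]
end
```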

Since we all know that a framework is nothing without a sample app...

Scrawny: A Lightweight, RESTful Persistent Queue on Top of thin, rack and invisible in < 60 sloc

Before we get started, I want to make something clear from the get go: THIS IS A PROOF OF CONCEPT — I KNOW IT ISN'T FINISHED — DON'T USE IT!

It's unfinished, but scrawny displays the amazing power of thin, rack, invisible, and active_resource. With an incredibly small amount of code, I was able to implement a stable, lightweight, persistent message queue, and its client. The code for doing all of the ugly stuff I talked about earlier in this article is a mere ~10 lines. I was able to focus on actually creating the queue. The other stuff was an aside, as it should be.


May 26, 2008

I am working on integrating a third party service in to a web app for a client — a task I can always count on to be at least somewhat painful. This particular service's API, however, is worse than painful. Their API is a failure in nearly every possible way — and some new and surprising ways I could have never imagined. After a couple of weeks of working with this API, I have compiled a list of DON'Ts, FAILs, and OMGs for your enjoyment.

How to FAIL at API

Sample Code

On their libraries page, they list an "unfinished" ruby library. I thought: no problem, even if the implementation isn't complete, at least this is something I can work with — a starting point. Boy, was I wrong.

FAIL: The library doesn't work, isn't tested, and is impressively unreadable. Is it a gem? There's no gemspec. Is it a rails plugin? There's no init.rb. Out of the box, it's not even clear how to test it in a console. Once I did manage to get it running in a console, it was pretty obvious that nothing worked. Worst of all, the way the error handling was set up, it kicked me out of irb every time a call failed (which was every time, because the code was FUBARed).

Lesson Learned: DON'T provide untested client code that doesn't even come close to working. It's better to provide nothing at all than to waste your client's time working with an unreadable mess of garbage that leads them nowhere. If you want to be in the business of distributing code, learn the language, and make sure you're distributing something that works.

Error Messages

There is absolutely no mention of error codes in the documentation for this particular API. So, I assumed that their data validation was pretty loose, and that as long as I satisfied the required parameters for a call, it would succeed. So, when I re-implemented their API client in ruby, I didn't build in any exception handling beyond the http communications layer. I was pretty confused when nothing was working.

OMG: There are error codes; they're just not documented. I dumped the messages out to inspect them. Maybe I had goofed something up on my end? Apparently the authors of this API decided that their error messages are so good that documenting the error codes was an unnecessary task. Unfortunately, my code doesn't understand English particularly well.

Lesson Learned: Document your error codes! From an implementer's point of view, missing error codes almost definitely means that your API is going to be unreliable, and thus, useless for any important task.


Encryption

I'm no guru, but I know enough about security to know that asymmetric encryption (or public-key cryptography) is secure as hell; that's why https, ssh, pgp/gpg, etc, choose it as their supporting technology. Using a public / private key pair, asymmetric encryption allows two parties to create a secure connection over an otherwise insecure channel without worrying about interception.

Data encrypted with a given public key can only be decrypted with the corresponding private key. So, unless the private key has been compromised, it is impossible for the communication to be decrypted by a third party. The sender can also sign a message with their private key; anyone holding the corresponding public key can then verify the signature, confirming the sender's identity. It's quite an elegant system, actually.
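That round trip is a few lines with Ruby's OpenSSL bindings (the key size and messages here are arbitrary):

```ruby
require 'openssl'

private_key = OpenSSL::PKey::RSA.new(2048)
public_key  = OpenSSL::PKey::RSA.new(private_key.public_key.to_pem)

# Encrypt with the public key; only the private key can decrypt.
ciphertext = public_key.public_encrypt('meet at noon')
plaintext  = private_key.private_decrypt(ciphertext)

# Sign with the private key; anyone with the public key can verify.
signature = private_key.sign(OpenSSL::Digest::SHA256.new, 'meet at noon')
verified  = public_key.verify(OpenSSL::Digest::SHA256.new, signature, 'meet at noon')
```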

It has some traction, too: it's secure enough for your bank, operating system vendors of all kinds, secure email vendors, and more, but apparently, the implementers of this API didn't feel public-key cryptography was adequate.

OMG: Instead, they chose to use symmetric encryption. That means there is a single key that both parties possess, and use to encrypt and decrypt their communications. Naturally, the flaw in this approach is that anybody in possession of the key can intercept communications with ease. So, when making use of symmetric encryption, the key has to be transferred across a highly secure channel.
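A toy illustration of why that matters (XOR here is purely for illustration, not a real cipher): the very same key that encrypts also decrypts, so an intercepted key exposes every message.

```ruby
# Symmetric toy cipher: one shared key, used in both directions.
def xor_crypt(text, key)
  text.each_byte.map.with_index { |b, i| b ^ key[i % key.size].ord }.pack('C*')
end

ciphertext = xor_crypt('launch at dawn', 's3cret')
recovered  = xor_crypt(ciphertext, 's3cret')  # anyone holding the key can do this
```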

FAIL: They sent me the key in an unencrypted email.

Lesson Learned: If you're going to diverge from the industry standard somewhere, especially when it comes to security, make sure to have a good reason for doing it. It wouldn't have taken much research to determine that https is an extremely secure protocol.

It's supported on nearly every system in the world. So, piggybacking on https would have meant that clients wouldn't have had to re-invent the wheel just to use this particular API.

The fact that they weren't aware of that in advance suggests that they don't understand the technologies that they're using, which is going to throw up a gigantic red flag for anybody working on integrating with their service.

EPIC FAIL: The funniest part about the whole encryption thing? HTTPS only uses public-key crypto to create a secure channel, in which to exchange the symmetric key. For performance reasons, the rest of the data is exchanged using a symmetric algorithm. The result is effectively the same, except that https is actually secure, whereas this symmetric key may have been compromised by somebody sniffing my network traffic when I grabbed my email that day.


Remarkably, there is more to this story, but it'll have to wait for another day. For now, what are some of your integration horror stories?

Introducing has_browser: Parameterized browse interfaces for your AR models.

May 19, 2008

Pretty soon after starting to use has_finder, I needed to write a browse interface. That is, given a set of parameters, return all the models that match. With has_finder, it is possible to compartmentalize those parameters into small chunks, each with names that make sense in your model's domain. You can then chain them together to get their combined scope. The only problem is calling them when you aren't aware, in advance, of which finders need to be called.

Having had this problem a few times, and being sick of solving it over and over again, I decided to extract a general solution from a project I'm working on.


It's a simple plugin, with a simple syntax. Using the canonical blog example, let's imagine we want to create a browse interface to posts. We'd want the user to be able to browse by category, author, or tags, but not to be able to access any of the other finders on the Post model for obvious security reasons. To set up has_browser, we'd do something like this:

has_browser :category, :tags, :author

Then, assuming the has_finders are already written, the posts can be browsed as follows:

Post.browse(:category => 'Testing', :tags => 'activerecord', :author => 'james')

In that example, each of the finders requires an argument; has_browser also supports finders that don't. As long as the argumentless finder is present in the browse hash, it will be called:

has_browser :category, :tags, :author, :without_args => [:order_by_date, :order_by_number_of_comments]

Post.browse(:category => 'Testing', :tags => 'activerecord', :author => 'james', :order_by_number_of_comments => 'true')

Browse can also be called from association_proxies. For a multi-blog platform, we could easily restrict browsing of posts to the current blog:

@blog.posts.browse(:category => 'Testing', :tags => 'activerecord', :author => 'james', :order_by_number_of_comments => 'true')

Since has_browser returns the same proxy as has_finder, it is possible to further restrict the results of a browse by chaining finders after the browse call. With our blog, for example, we'd probably want to restrict browsing to published posts.

@blog.posts.browse(:category => 'Testing', :tags => 'activerecord', :author => 'james', :order_by_number_of_comments => 'true').published

Note: It is not possible to chain finders before the browse call.
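Conceptually, browse is just a whitelist plus chained sends. Here's a plain-Ruby sketch of the idea (hypothetical names, not has_browser's actual implementation):

```ruby
ALLOWED_FINDERS = [:category, :tags, :author]

# Call only the declared finders, chaining each result onto the last,
# so unknown or unsafe finder names in the params are ignored.
def browse(scope, params)
  params.each do |finder, value|
    scope = scope.send(finder, value) if ALLOWED_FINDERS.include?(finder)
  end
  scope
end
```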

Finally, like has_finder, has_browser is compatible with will_paginate out of the box.

Get It

Releases of has_browser will be available as a gem, which you can freeze in to your rails app (init.rb included):

$ sudo gem install has_browser

Development, of course, is at github.

Experimental resource_controller Feature: Custom Action Discovery

May 08, 2008

A few weeks ago, Nate Wiger emailed me to ask whether I was interested in a patch for r_c. Evidently, it is possible to determine which controller actions need to be created by examining the routes that are pointing to it. Nate had also noticed that most controller actions follow a pattern:

  1. Load or build an object or a collection (i.e. Post.find(1) or Post.new(params[:post])).
  2. Optionally do something with the object(s) (i.e. @post.save).
  3. Optionally set the flash.
  4. Render a response.

Put those pieces together, and it's possible to create a controller abstraction that is aware of what's routed to it, and sets up a set of sensible defaults, even for custom actions. As you might have guessed, Nate and I have done exactly that.

For the 7 default RESTful actions, things haven't changed much. The only real difference is that you can now change step 2 (refer to the list above) by passing a block to the action's action accessor:

create.action do
  object.save
end

The action block's return value determines whether the success or failure callbacks (i.e. success.flash vs failure.wants) are triggered.
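That rule is simple enough to sketch (a hypothetical helper, not r_c's internals):

```ruby
# The action block runs; its truthy or falsy return value picks which
# callback chain (success or failure) fires.
def run_action(action, on_success, on_failure)
  action.call ? on_success.call : on_failure.call
end

saved  = run_action(proc { true },  proc { :success_callbacks }, proc { :failure_callbacks })
failed = run_action(proc { false }, proc { :success_callbacks }, proc { :failure_callbacks })
```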

For custom actions, you'll now be able to use the same DSL you're used to for customizing default actions. For example, if you had a custom action called update_collection, you might put something like this in your routes:

map.connect '/posts', :controller => 'posts', :action => 'update_collection', :conditions => {:method => :put}

Based on that route, r_c will automatically be aware of your action, and create a basic skeleton for it, following the pattern of the list above. For a collection update method, you might want to customize your action like this:

update_collection do
  before { @posts.each { |p| p.update_attributes(params[:posts]) } }
  action { }
end

That's all you'd have to do. r_c would automatically load the objects (including any parent, respecting polymorphism), and render using its internal helpers.

Note: It is also possible to modify loading by setting a block for the build accessor, which corresponds to step 1.

Highly Experimental

This is highly experimental software. I'm not sure how much I love it, since it's a bit on the magical side for my tastes. However, it is kinda neat to have your controller be aware of which actions to create, based on what is routed to it. I have been having a bit of fun playing around with this.

So, please give it a shot, and let us know how you like it on the mailing list.

Get it from github (checkout the automatic_route_discovery branch). If you aren't using git, you can get a tarball here.

Rails Training w/ James Golick & Other Rails Ninjas

Apr 29, 2008

This past December, a friend of mine, Peter, wanted to improve his rails skills. I had been asked by a few people about teaching some rails training sessions, but wanted to give it a beta test first. When I was in Toronto over the holidays, Peter and I gave it a trial run, which went great. Since then, I've been dying to offer ruby / rails training to a wider audience.

How It'll Work

The sessions will be based on the format that I tried out in my beta session with Peter. We'll start by covering some advanced rails fundamentals, with time for questions, and plenty of time to go off on long tangents about whatever you might want to learn (like, how to write a plugin, or how some of the rails internals work). The remainder of the session will be spent coding.

During the coding portion of the sessions, the instructor(s) will pair program with each of the participants in a rotation. Think of it as an opportunity to work extremely closely with a rails expert. We'll be able to apply what we've learned in the earlier portion of the session. We'll work on your real problems; that way, we'll be teaching a course that's custom tailored to you, and you'll walk away from the experience with some code, to boot!

Participants will be asked to bring two projects: one in progress (or finished), and one idea.

  • The project in progress serves as a tool for analysis of where you're at, and what kind of coding practices you have in general. We'll work out some possible refactorings for that project, and talk about how you'd make use of advanced rails techniques to improve its readability, maintainability, etc.
  • Next, we'll work on your project idea. The rest of the session will be spent architecting, and actually coding this project. We'll help you pick the right plugin set, model your data, and anything else.


  • I'd like to offer the first session some time in early July, most likely over a weekend.
  • The sessions will be held in Montreal, unless there's a compelling reason to offer them somewhere else, like, a group of participants who all live in the same city.
  • Price is still TBD; I've got some potential sponsors (which would offset the price for participants), but nothing confirmed yet. There would certainly be a discount for groups or companies.

Finally, I intend to keep the instructor to participant ratio extremely low to support plenty of one-on-one time. So, if there is sufficient interest, I'll get some of my fellow Montreal rails ninjas in as additional instructors. I've got some awesome people in mind. Who knows — you might even get scouted by one of the local rails shops!

If you're interested, send me an email (top right) for more info.

Introducing My New Company: Giraffetyp

Apr 28, 2008

Everybody loves their iPod, and rightly so. It's one of the greatest technology products of all time. The iPod experience is the sum of many parts: design, development, etc. One of Apple's biggest strengths is their ability to put all of these pieces together effectively. If the iPod was ugly, unusable, had buggy software, or felt lousy in your hands, it wouldn't be the same device; it would be mediocre like all of the other portable mp3 players that came before and after it.

On the web, shops are usually run with a strong focus on one piece of the puzzle. Some firms are development-centric, others are design-centric, others still are business-centric. A shop's core skill usually trumps everything else. Even when they hire consultants to fill the gaps in their skill set, shops tend to keep focused on their core competencies, failing to properly execute the work of the consultants. It seems to happen everywhere — even in the best shops.

So, having the best people work on the product isn't enough; to achieve greatness, there has to be tight collaboration between all parts. The developers must respect and understand the designers, and vice versa. Most importantly, no one concern can trump another.

Come read the rest of this article at the Giraffetyp blog. Note: I'm going to be doing a lot of my blogging over there for the next little while, so, please subscribe there too.

Introducing Action Messager: Dead simple IM notifications for your app!

Apr 07, 2008

Sometimes email is the wrong choice for webapp notifications. Our inboxes are becoming increasingly cluttered, and especially for those of us who carry PDAs, it can be a pain to scroll through twelve facebook notifications just to get to the mail that we actually need to read. What's more, email just isn't that great of a tool for receiving short messages, since you have to open them each individually. At the very least, it's nice to have another choice.

Particularly with shorter notifications, instant messaging can be a great alternative to email. The messages don't clutter up your inbox, or need to be opened individually. Most people already use it on a daily basis. The only problem is implementation.


ActionMessager is a simple framework for creating IM-based notifiers. It is structured like ActionMailer, so it's got virtually zero learning curve for most rails developers. What's more, because the syntax is the same, it's pretty easy to create a delegate class that acts like your mailer, but sends IMs as well. That means drop-in replacement!

All you have to do to start sending IM notifications to your users is subclass ActionMessager::Base. Then, create a method that sets an array of recipients, and returns the message you'd like to send:

class JabberNotifier < ActionMessager::Base
  def friendship_request(friendship_request)
    @recipients = friendship_request.receiver.jabber_contact
    "You have received a friendship request from #{friendship_request.sender.name}! Click here to accept or decline: #{friendship_request.url}"
  end
end

Then, wherever you'd like to send the notification:

JabberNotifier.deliver_friendship_request(@friendship_request)
That's all there is to it.
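The drop-in idea mentioned earlier (a delegate that acts like your mailer but sends IMs as well) can be sketched like this; the names here are hypothetical:

```ruby
# Fans every deliver_* call out to each underlying notifier, so existing
# call sites keep working unchanged.
class DualNotifier
  def initialize(*notifiers)
    @notifiers = notifiers
  end

  def method_missing(name, *args)
    @notifiers.map { |notifier| notifier.send(name, *args) }
  end

  def respond_to_missing?(name, include_private = false)
    @notifiers.any? { |n| n.respond_to?(name, include_private) }
  end
end
```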


Caveats

Currently, only jabber is supported. It is possible to access other IM services over jabber, but I'm not 100% clear how it works, and I don't yet have need for it, so that may or may not come later.

The bigger caveat, though, is speed. Sending a notification takes anywhere from 1-2s (whether you send 1 or 5). It may be possible to improve performance by using a different jabber library, and I'll probably investigate that in the near future. Even with a performance boost, though, you should still take your notifications out of the request cycle, by using something like workling to process them asynchronously.
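Taking delivery out of the request cycle amounts to enqueueing the arguments and letting a worker do the slow part. A generic in-process sketch (not workling's API):

```ruby
require 'thread'

NOTIFICATIONS = Queue.new

# In the request cycle: enqueue and return immediately.
def notify_later(recipient, message)
  NOTIFICATIONS << [recipient, message]
end

# In a worker: drain the queue and perform the slow IM delivery there.
def drain_notifications
  delivered = []
  delivered << NOTIFICATIONS.pop until NOTIFICATIONS.empty?
  delivered
end
```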

Get It!

Get action_messager in gem form:

$ sudo gem install action_messager

Or get the source, from github:

$ git clone git://

attribute_fu and jQuery shake hands

Apr 03, 2008

This blog has been quiet of late, because I'm working on a couple of exciting, but still top-secret projects. Anyway, I recently moved one of those projects over to jQuery, because of its speed, syntax, and general awesomeness. Tonight, when I went to create a multi-model form with attribute_fu, I was stopped dead in my tracks by its heavy dependency on prototype. A few minutes of hacking later, a_f and jQuery are playing rather nicely together.

Get it from the jquery branch of a_f's git repo. If you aren't using git yet, this might be just the excuse you need to check it out! Or, download a tarball from github. After you install the plugin, you'll have to copy its one javascript dependency (jQuery templates) from the javascripts directory over in to your public/javascripts and require it in your layout.

Plugins I've Known and Loved #3: Ultrasphinx

Mar 03, 2008

If you've ever implemented searching in your rails app, you probably noticed that it's a major hassle (unless, of course, you already know about Ultrasphinx). Ferret is probably the most commonly used indexing solution, but, it isn't anywhere near production ready. acts_as_solr, another option, is so full of show stopper bugs that it really isn't even worth bothering with. The worst part about both of those solutions is that they usually work great in your development environment. Try to put them in to production, though, and you're in for big problems.

Enter Ultrasphinx, by Evan Weaver. Sphinx is a super high performance search daemon, designed to suck information out of a mysql or pgsql database (though, it isn't limited to that). Normally, one would configure Sphinx with SQL queries that it uses to fetch the data. Apparently, the configuration file can be a major hassle to work with — I wouldn't know. Ultrasphinx provides you with a declarative API that it ultimately uses to generate the Sphinx configuration file for you, making the process quick and painless. A few lines of code, a couple of rake commands, and you're searching.

The Sphinx approach has several advantages over ferret, and acts_as_solr. First of all, Sphinx is extremely stable. They're nearing a 1.0 release, and its stability certainly merits that. You don't even really need to worry about monitoring the search daemon, because it just isn't going to crash. Also, if you're running any rails apps in production, you know that long running ActiveRecord callbacks can lead to your app performing very poorly. The ferret and solr solutions both rely on active record callbacks to inform the indexer of new data. Moreover, if the search daemon goes down for any period of time, or, say, the index becomes corrupted (ferret, I'm looking at you), your entire app is going to be down for the count. You set the Sphinx indexer to run every half hour on a cron job, and since it works with the db directly, its performance and stability characteristics have absolutely zero impact on your app.

Trying it Out

So, let's implement a search for our blog. First, we'll want to edit the paths to the index and logs in the default config file. Note that Ultrasphinx uses two types of config files: one that the programmer edits, and one that it generates; both are necessary on all machines that are accessing the index (seriously, not having the generated config on a slave machine caused me some trouble). Then, we'll want to declare our post model as indexed (note that you must declare the fields as strings, not symbols):

class Post < ActiveRecord::Base
  is_indexed :fields => ['title', 'body']
end

Then, we'll need to ask Ultrasphinx to generate a configuration file. You've got to re-run this rake task any time you change your models' is_indexed declarations. Make sure to .gitignore (svn:ignore for people still stuck in svn land) the generated file in development. Evan recommends checking the production file into version control, but I have it set to generate automatically on deploy with a cap recipe. That way, if I make changes to the indexing, I won't forget to re-generate the config file. All you have to do is run:

$ rake ultrasphinx:configure
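As an aside, the deploy-time regeneration mentioned above could be wired up with a Capistrano recipe roughly like this (the task and hook names are assumptions, not my exact recipe):

```ruby
# Hypothetical cap recipe: regenerate the Sphinx config on each deploy.
namespace :ultrasphinx do
  task :configure, :roles => :app do
    run "cd #{release_path} && rake ultrasphinx:configure RAILS_ENV=production"
  end
end

after 'deploy:update_code', 'ultrasphinx:configure'
```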

Then, since it's our first time, we've got to run the indexer:

$ rake ultrasphinx:index

Finally, we'll need to start the search daemon:

$ rake ultrasphinx:daemon:start

Now, we can start searching.

@search = => params[:query])

So, it's really easy to build a basic search engine using Ultrasphinx. There are some gotchas, though.

Gotchas & Notes

  • More complex indexing with Ultrasphinx can be slightly more verbose, and SQL-focused than it would be with a solution that relies on AR callbacks to do its indexing.
  • Transforming data with Ruby is impossible; the data you index must be in your database (unless you use a stored procedure, which can actually be written in ruby if you're using pgsql, but I digress).
  • Ultrasphinx preloads your indexed models when it is initialized. So, if your models depend on any monkey patches that live in your app's lib directory, those patches must be loaded before Ultrasphinx (in my experience, this has meant pluginizing my monkey patches). Because of the way that exceptions are caught in the preloading routine, you won't see the actual error that is stopping your model from loading. Instead, you'll just get a constant loading error, or a name error, or something similar. If you see something like that, and your models load fine without Ultrasphinx installed, look for dependency issues.
  • Ultrasphinx attempts to preload your indexed models using a regex that doesn't respect commenting (at the time of writing). If you're struggling with issues mentioned in the last gotcha, you'll probably try commenting out the is_indexed call to see whether that's what's causing the problem. That won't work. You can either delete the is_indexed call entirely when debugging, or pull from my git repo, where I've modified the regex to respect #-style commenting (but not the =begin/=end style).
  • If you see DivideByZeroErrors in production, it's probably because you're missing the generated configuration file on one or more of your app server machines.

Check it Out

You'll need to grab Sphinx. Then, the plugin...

Get it from svn:

$ svn export svn:// vendor/plugins/ultrasphinx

Or pull from my git repo (for the change described in the gotchas section):

git clone git://

To Sum Up

Ultrasphinx is by far the most effective rails searching solution I've come across. Unlike most of the other options, the search daemon is incredibly stable, and the index never seems to become corrupted (I'm running it in a relatively high load production environment with absolutely zero trouble so far). Also, since Ultrasphinx doesn't rely on AR callbacks for indexing, your application isn't quite as coupled to your search daemon; if it dies, search functionality will break, but the rest of your app will still function. It's not without problems, and complex indexing can be trickier, but Ultrasphinx's stability and performance make the choice a no-brainer.

Plugins I've Known and Loved #2: has_finder

Feb 25, 2008

Plugins I've Known and Loved started out as a would-be series of presentations at Montreal on Rails. I've been bad about getting to the second presentation, so I thought I'd continue PLIKaL here. No, I'm not going to talk about any of my plugins. Instead, over the next week or two, I'd like to tell you about some wonderful plugins that I only recently discovered (though, none of them are particularly new). Today — a plugin that belongs in everybody's toolbox: has_finder.

The basic idea of this school of plugins is creating reusable scopes for your models. For instance, if you had a blogging application with a post model, you might want to query for published posts. To that end, you might write something like this in your controller:

Post.find(:all, :conditions => {:published => true})

Hopefully, though, you're familiar with the skinny controller, fat model best practice, so you'd write it like this:

class Post < ActiveRecord::Base
  def self.published
    find(:all, :conditions => {:published => true})
  end
end
That's a lot better, but far from perfect. If you were to write several such methods, for example, you would not be able to combine them easily. So, looking for all published posts written by James would require writing a second method; not very DRY. Of course, ideally, you'd use an association proxy to accomplish that goal anyhow. That would work with our hand-written finder, but we'd lose all the benefits of the association_proxy like nested finds (...published.find(:all, :order => 'created_at DESC')), and we certainly wouldn't be able to chain two of them together (...published.order_by_recent). has_finder solves those problems, and more.

I Can Has Finder?

In order to reproduce the finder we wrote earlier, you'd write the following:

has_finder :published, :conditions => {:published => true}

Should you want to find all published blog posts by James, you could then make use of association proxy:

User.find_by_name('James').posts.published.find(:all, :order => 'created_at DESC')

I have come across a good number of associations used for similar purposes, but defined on the associated model. For instance (omitting irrelevant details):

has_many :published_posts, :conditions => {:published => true}

Having discovered the wonders of has_finder, I now consider this to be an anti-pattern. The has_finder method is definitely DRYer. Far more importantly, however, the definition of what makes a post published (in our case, :published => true) belongs in the post model. The same rule applies to ordering. Following this best practice ensures there's only one point of change for refactoring, and that your models and their tests tell a more complete story about the data they represent.

Finally, has_finder supports parameterization of finders. You can wrap your conditions hash in a lambda that accepts an arbitrary number of arguments. Continuing with our blog post example, you might wish to query your posts table for all posts this week, this month, or this year, depending on the situation. To keep DRY, you could define your finder as follows:

has_finder :since, lambda { |date| {:conditions => ['created_at > ?', date]} }

Using it like this:

Post.since(
To put it all together, let's query for all James's published posts in the last week:

User.find_by_name('James').posts.published.since(
It reads just like a sentence!

Oh yeah, and all of this is compatible out of the box with will_paginate. I heard a rumor that it's going to be sucked down into rails, too. Really, how could you not check it out (get it here)?

The Crunch Mode Paradox: Turning Superstars Average

Feb 16, 2008

We've all been there — a product nearly (or not so nearly) ready to launch. Bosses breathing down your neck. Deadline fast approaching. People start to work evenings, then weekends. It starts to feel, perhaps rightly so, like nobody is doing anything other than working. Crunch mode is the time when everybody buckles down and focuses on the product, and only the product. Forget about everything else.

Crunch mode is a result of the belief that more hours spent coding means more software gets written. The obvious problem with management's formula is that a tired programmer can do more harm than good. A particularly nasty bug, for instance, can cost a team orders of magnitude more time to fix than it took to write. So, while it may be possible to increase the total number of lines of code written, it only seems reasonable to measure shippable lines of code as the metric for productivity. Nasty bugs written by tired, over-worked developers should be counted against total output, rather than for it. In crunch mode, managers tend to overlook the bugs, and see productivity as increasing, while it's really decreasing. That's the crunch mode paradox.

Why Does it Happen?

You've assembled a team of all-star programmers. They're going to practice test-driven development, continuous refactoring, and other agile processes. They are code artists. Indeed, your team of developers is obsessed with 'beautiful code', and whether it's artistry, or OCD, they're not happy until code is perfect. They write shippable code.

The key to understanding why your team of carefully selected programmers breaks down in crunch mode is understanding what makes them tick in normal mode. Writing shippable code means being in a constant state of careful thought. Each change to the code is a calculated maneuver, the programmer's brain always working hard to discover new patterns of duplication that may have emerged in the code base, and promptly abstract them. The process of continuously searching for a better way — for better abstraction — is a major key to writing high quality code. As with anything else, abstraction can become a pitfall, but used properly, it means writing less code, easier testing, and perhaps most importantly, fewer places for bugs to come from. Of course, continuous refactoring, and code reuse aren't the only pieces of the puzzle, but it's important to note the thought process of the developer who is crafting his code this way.

In normal mode, your superstar developer cares about writing beautiful code; that's all. That's why he makes sure to write a comprehensive test suite, to continuously refactor, and makes sure his code is readable. That is probably the biggest difference between your great developer, and the average developer you went to so much trouble to avoid hiring; your superstar's priority is the code, whereas the average programmer's priority is going home at the end of the day. When you flip that crunch mode switch, though, priorities change.

When you tell your team that there's a deadline for a certain feature set (especially when it's a tight one), the focus is no longer on the code. When everybody is racing to accomplish a set of tasks for a certain date, coming in to work every day is about trying to get as many of those features out the door as possible in as short a time. The pressure is on. People start cutting corners. Processes break down. People write fewer tests, refactor less frequently. You have effectively turned your superstar team into a group of average programmers.

In factory terms, a worker's production rate decreases over time. A worker who is creating 10 widgets/hour at the beginning of a shift may be producing only 6/hour at the end of the shift, having peaked at 12/hour a couple of hours in. Over time, the worker works more slowly, and makes more mistakes. This combination of slowdown and errors eventually reaches a point of zero productivity, where it takes a very long time to produce each widget, and every last one is somehow spoiled. Assembly-line managers figured out long ago that when this level of fatigue is reached, the stage is set for spectacular failure-events leading to large and costly losses – an expensive machine is damaged, inventory is destroyed, or a worker is seriously injured. (Why Crunch Mode Doesn't Work)

It Doesn't Have to Be This Way

There are two problems with crunch mode: management, and developer. Being a developer, I'm going to go after the pointy haired among us first.


Crunch mode doesn't work. Sending your team on a death march quickly leads to programmer fatigue, which nearly always leads to bad code. Moreover, developers are likely to become demoralized, burnt out, lose interest in the product, and perhaps worst of all for you, they'll become resentful of management. Worse, in many cases, due to nasty bugs, and later hits in maintainability and consequent necessary rewrites, the crunch mode developer is outputting less in total. Finally, this sort of crunch mode is nearly always symptomatic of a team that has slipped into waterfall development.

The great thing about true agile development is that after the first couple of weeks, the code is always in roughly shippable state. So, if you are working on a product that has to launch on a particular date, for whatever reason (be it PR, or anything else), agile is a highly effective way to ensure that you're going to have a shippable product ready. The reality is that trying to shove a specific featureset down the throats of your development team with a hard deadline simply does not work. The software will be alpha-quality if you're lucky.

John Nack, Adobe Photoshop product manager says of their transition to agile:

The team's ability to deliver a public beta--a first for a major Adobe application*--is probably the single greatest tribute to the new method. In the old days, we could have delivered a feature-rich beta, but the quality would have been shaky at best. This time, as Russell says, "The public beta was basically just 'whatever build is ready on date X,'" because we knew it would be stable. (Agile development comes to Photoshop)


The key to beating crunch mode, in my experience, is forgetting about the pressure being put on you (not easy, I know); you have to work as normal. What happens to us during crunch mode is extremely counter-productive. For some reason, when we want to work faster, we stop doing all of the things that save us time. Testing, refactoring, and general caring for code are what separate ultra-productive developers from average ones. But best practices are the first thing to go when the pressure's on.

Once you've pulled yourself together, and started working normally again, the next step is to start saying no. Obviously, this piece of advice is going to be controversial, but I think it's a discussion worth continuing. As Raganwald says:

Try this: Employ an Engineer. Ask her to slap together a bridge. The answer will be no. You cannot badger her with talk of how since You Hold The Gold, You Make The Rules. You cannot cajole her with talk of how the department needs to make its numbers, and that means getting construction done by the end of the quarter. (What I admire about professions like Engineering and Medicine)

Obviously, nobody is going to die on account of (most) bad software (as they might with a poorly built bridge). But, when you know that you, and others on your team are doing bad work, it's time to speak up. If you don't, you're doing a disservice to yourself, and your company. You're the one who is going to have to maintain this nightmare when crunch mode is over (and we know that there's never really time for a cleanup). Your company is going to have to deal with low quality software, and likely a botched release. Everybody loses.

The Crunch Mode Paradox

Crunch mode is a paradox. Managers think they're fostering increased productivity, but instead, they're damaging morale, the product, and often causing a net decrease in output. Developers try to increase their productivity by cutting all of the corners that make them effective in the first place. For those reasons, the death march has the opposite of its intended effect. Agile lifecycle management seems to be the most effective tool for guaranteeing that shippable software will be ready on a particular date. So, for companies trying to develop software, next time, try agile instead of crunch mode. Everybody will be happier.

resource_controller 0.2: maintenance release - no more edge_compatible branch

Feb 15, 2008

This is mainly a maintenance release.

  • The helpers have been broken out in to four separate files internally, to help with managing the deep nesting branch.
  • There have also been a few little refactorings in preparation for some new features to come shortly.
  • The biggest thing to note for users is that there is no longer an edge_compatible branch. Since rails 1.2.6 generates the same style of named routes as 2.0.2 (edit_tag_photo_path instead of tag_edit_photo_path), there is no longer a need to continue two separate streams of development (yay!).
  • The generator has been updated to spit out the right filenames for templates (rhtml/haml vs html.erb/html.haml), and old-style migrations (t.column instead of t.string) for any users still stuck on 1.2.6, so the transition shouldn't be a problem.

Get It!

You can get the new version by exporting from svn:

svn export vendor/plugins/resource_controller

Or, if you're using piston, you may need to switch to the new url if you were previously on edge compatible (this is untested, so it may be slightly wrong).

  piston switch
  piston update

As always, everybody is encouraged to come join the discussion on the increasingly lively mailing list.

An Introduction to Association Proxy

Feb 10, 2008
@post.comments << => 'something')
Comment.find(:all,  :conditions => ['post_id = ? AND title like ?',, 'something'])
Comment.count(:all, :conditions => ['post_id = ?',])

There is an easier way to do all of that. So, if you're still manipulating ActiveRecord associations by hand, stop; there is a better way.


When you request a collection of associated models, ActiveRecord extends that array with several helpful methods. You can treat that array like a model class, except all of the methods are automatically scoped to the association's parent. It responds to methods like find, create, new, count, etc., and it's called AssociationProxy.

So, what if we wanted to create a comment, scoped to our post model?

@post.comments.create :title => 'something'
Making use of AssociationProxy, the code reads better, and requires fewer keystrokes. We can also use AssociationProxy to find elements in our collection.

@post.comments.find(:all, :conditions => {:title => 'title we are looking for'})

We can even use dynamic, attribute-based finders through AssociationProxy.

@post.comments.find_by_title 'title we are looking for'

It Gets Better

Since assoc proxy responds to many of the same methods as a model class, the two can be interchanged for many operations. In resource_controller, for example, nested controllers depend heavily on association proxies. Let me show you what I mean.

If we had, for example, a typical show action.

def show
  @comment = Comment.find(params[:id])
end

With very little effort, we can allow comments to be shown nested, or unnested.

def show
  @comment = model.find(params[:id])
end

private

  def model
    params[:post_id] ? Post.find(params[:post_id]).comments : Comment
  end

Since we know that AssociationProxy responds to other model class methods, we can base our create method on this technique, too.

def show
  @comment = model.find(params[:id])
end

def create
  @comment =[:comment])

    redirect_to @comment
  else
    render :action => 'new'
  end
end

private

  def model
    params[:post_id] ? Post.find(params[:post_id]).comments : Comment
  end

In fact, this is nearly exactly how resource_controller is built.

So, association proxy is useful in simple and complex situations alike. It can save you a lot of code, and increase readability. It's a win-win really.

Look out Ryan Bates - There's a New Screencast in Town

Jan 27, 2008

So, I'm sitting there on Friday evening working on a screencast introduction to attribute_fu, when Fabio Akita sends me an email to tell me he just created one for resource_controller! It was like he'd read my mind.

Fabio was nice enough to agree to let me put some title screens on his r_c screencast, but I couldn't seem to get it to export while still looking decent. So, here's the first episode of GiraffeCasts (for those of you who don't know, my company is called GiraffeSoft), and Fabio's awesome resource_controller introduction.

Episode 1: attribute_fu

Episode one is a quick walk through the basics of attribute_fu. Get it here.

As promised, here is the code from the screencast:

## _task.html.erb
<p class="task">
  <%= f.text_field :title %>
  <%= f.remove_link('Remove') %>
</p>

## _form.html.erb
<div id="tasks">
  <%= f.render_associated_form(@project.tasks, :new => 3) %>
</div>
<%= f.add_associated_link('Add Task', %>

## project.rb
class Project < ActiveRecord::Base
  has_many :tasks, :attributes => true

  before_save :remove_blank_tasks

  def remove_blank_tasks { |task| task.title.blank? })
  end
end

To get attribute_fu:

$ piston import vendor/plugins/attribute_fu

Fabio's resource_controller Screencast

Fabio Akita gives an excellent tour of resource_controller, in screencast form. Get it here.

See Ya Next Time

That's all for today. Check back for more GiraffeCasts!

Ruby Reddit

Jan 25, 2008

When reddit announced that they were going to be offering users the ability to create their own reddits, I sent them an email asking for a beta account. Guess what? They gave me one!

So, rubyists everywhere, here it is: our very own ruby reddit!

Why Distributed Version Control Matters to You, Today

Jan 20, 2008

If Linus's talk about git made you feel like a moron, rest assured you're not alone. Distributed version control is one of the most poorly explained topics in software today. There are plenty of people saying that you should use it, but nobody has done a great job of explaining why. Here's my take.

WTF is Linus talking about?

First, let's talk about exactly what distributed version control is. The most common approach to describing a decentralized system is to present the reader with an image like this one, taken from the mercurial page called UnderstandingMercurial:

If that image makes any sense to you (and you're new to distributed version control), congratulations. The rest of us need a clearer description.

The best place to start is with the differences between centralized and decentralized version control. With a centralized system like Subversion or CVS, there is a single copy of the repository; it typically resides on a server somewhere. When a developer works with the code, they receive something called a "working copy" from the central repository. The working copy contains enough information to interact with the central server, but does not contain the revision history for the project, nor the branches or tags (though, the branches and tags can often be explicitly requested).

With a decentralized system, the opposite is true. Instead of checking out a working copy, the developer works with the entire repository, including the entire revision history of the project and all the branches and tags. The copy that the developer receives is identical to the repository they fetched it from. Commits, branches, and tags occur locally, on the copy of the repository that the developer made. Changes can then be pushed and pulled to or from a public repository somewhere, another developer's repository, or anywhere else that a repository exists.


Usually, when you want to get a build or source distribution of open source code, you head to the project's main website. The project's repository is closely guarded, with commit rights only awarded to a select few. Anybody wishing to make changes to the source must first check out a working copy from version control, submit a patch, and hope that it is accepted.

When every developer has a full copy of the repository on their machine, the hierarchy of open source projects is all but eliminated. Any developer who wishes to work on the source code can clone the repository, commit as much code as they want to it, receive the changes from any other developer's cloned repository, and publish their work for other developers to use, or pull back into their own repositories. In a decentralized system, it doesn't matter who has the "...keys to the source repository..." (it actually says that on the rails core team page — take a look for yourself). If the original author continues to maintain the best version of the code, great; if not, users of that code can begin to pull from whoever does have the best version.


Entirely theoretical software articles are lousy — so, I always try to provide examples out of real software; this article is no exception.

Many (maybe most) rails plugins are inactive. They were created to scratch an itch, published, kept up to date for a few months, and then left with no maintainer. Since rails plugins largely reside in subversion repositories, nobody can continue development without losing the entire revision history of the project, and going to the trouble of setting up a public svn server.

Markaby is no exception to that rule. When I tried to use it in a recent project, I found that it was incompatible with rails 2.0.2. According to markaby's subversion logs, the last change was November 24th, 2007, a few weeks before that version of rails was released. Luckily, I was able to find a ticket in rails trac with instructions on how to hack a fix into the plugin — a solution that worked great for me, but wouldn't work for a user who didn't know their way around plugins. Without commit access, normally I wouldn't be able to offer my fix publicly.

Not so with a distributed system! I was able to use git-svn to pull Markaby into a git repository. I've published my changes, and now anybody can grab a working version of Markaby by typing the following into a command prompt:

  $ git clone

Even cooler than that, somebody with commit access can grab the changes, and push them back into subversion, including commit messages, and everything! Anybody who clones the git repository can still pull changes from _why's subversion repository if it ever becomes active again. If not, development can continue anywhere, and be done by anybody. That's the beauty of decentralization.

Taking Style Tips from Natural Language

Jan 13, 2008

When one speaks of the readability of a computer program, they refer to the ease with which the source code is read by a human. More recently, with languages like Ruby and Python, one often hears praise for sections of code that seem to read like natural language. That sort of code is easier to read, but seems to be more challenging to write. If English-like is what we're after, perhaps style manuals can teach us something about writing code.

Having always been interested in writing English, I've read several books on style. The text that I continue to return to is Strunk & White's The Elements of Style. It has served me well as a reference for writing English; let's see how it does for Ruby.

Lesson One: Put statements in positive form.

The rule from Strunk & White:

Make definite assertions. Avoid tame, colorless, hesitating, noncommittal language. Use the word not as a means of denial or in antithesis, never as a means of evasion.

Elements of Style provides a short explanation of each rule, followed by several illustrative examples. Each example contains a violating passage, and a correction. Here's an example from this rule...

The violating example:

He was not very often on time.

The correction:

He usually came late.

Yes, Strunk and White provide refactorings; I'll do the same.

if !
  render :action => 'create'
else
  redirect_to post_url(@post)
end

That example is rather simple. It says: If the post doesn't save, render the create action, otherwise, redirect to the post's url. I think we can agree that in most cases, it would be better to express that as follows:

  redirect_to post_url(@post)
else
  render :action => 'create'
end

By putting our statements in positive form, we have refactored that code to read: If the post saves, redirect to its url, otherwise render the create action. Because we have eliminated the negation, the code reads more smoothly and easily. While that is an obvious example, it shows that there is something to this. Let's look at another rule.

Lesson Two: Use the active voice.

The rule from Strunk & White:

The active voice is usually more direct and vigorous than the passive.

The violating example:

It was not very long before she was very sorry that she had said what she had.

The correction:

She soon repented her words.

This rule doesn't apply as directly, but I have always loved it as a style guideline for code. Take this rather common example:

[1, 2, 3].each do |number|
  if number.even?
    # do something with an even number
  end
end
That example says: With each element in this array, if that element is even, do something with the even element. Collecting the desired elements that way reads passively. Making use of some more of our tools from ruby's Enumerable module, we can refactor it to use the active voice.

[1, 2, 3].select(&:even?).each do |number|
  # do something with an even number
end

The refactoring says: With all of the even numbers in this array, do something. It reads shorter, and expresses the intent of the coder more directly.

From the Real World

The next example we're going to look at comes from ActionController::Base (a central class in rails). I have selected a few illustrative branches from a conditional that spans some sixty lines, in a method that spans nearly one hundred. It is the rendering logic:

if file = options[:file]
  render_for_file(file, options[:status], options[:use_full_path], options[:locals] || {})

elsif template = options[:template]
  render_for_file(template, options[:status], true)

elsif inline = options[:inline]
  render_for_text(@template.render_template(options[:type], inline, nil, options[:locals] || {}), options[:status])

elsif action_name = options[:action]
  template = default_template_name(action_name.to_s)
  if options[:layout] && !template_exempt_from_layout?(template)
    render_with_a_layout(:file => template, :status => options[:status], :use_full_path => true, :layout => true)
  else
    render_with_no_layout(:file => template, :status => options[:status], :use_full_path => true)
  end
end

It says (bear with me): If the file option is present, render the file, or if the template option is present, render the template, or if the inline option is present, pass the instance variables to the template, and render some inline text from the controller, or if the action option is present, render that action. Note: that transliteration has been shortened somewhat.

The passiveness of a multi-branch conditional damages clarity; it creates a meandering experience for the reader. The problem is that separate concerns are being forced together. In the case of the render method, the conditional determines which type of rendering to perform, and contains the actual code to perform that type of rendering.

Long conditionals also seem to resemble run-on sentences. Since one must remember all of the context as they read through the branches of this sort of structure, it quickly becomes difficult to follow. It would be better to separate the decision from the specific rendering logic; that will make it easier to be more direct.

def render
  render_types   = [:file, :template, :inline, :action]
  type_to_render = render_types.detect { |render_type| options[render_type] }

  send "render_#{type_to_render}"
end

def render_action
  template = default_template_name(action_name.to_s)
  if options[:layout] && !template_exempt_from_layout?(template)
    render_with_a_layout(:file => template, :status => options[:status], :use_full_path => true, :layout => true)
  else
    render_with_no_layout(:file => template, :status => options[:status], :use_full_path => true)
  end
end

def render_inline
  render_for_text(@template.render_template(options[:type], options[:inline], nil, options[:locals] || {}), options[:status])
end

def render_template
  render_for_file(template, options[:status], true)
end

def render_file
  render_for_file(file, options[:status], options[:use_full_path], options[:locals] || {})
end

Now, it reads: The render types are file, template, inline, or action. The type to render is the one that is present in the options hash. Call the method named: render underscore the type to render. Refactored, each of the ways of rendering gets its own method, and those methods have names. That way, it is clear what each of those blocks of code does, and what we're doing when we call it; we needn't decipher a conditional to find our way to the pertinent logic. Separating code into small chunks housed in well-named methods allows us to express our intentions directly.

As an aside, writing methods this way is far more testable. In its original form, the render method does many different things, depending on the parameters it receives. Testing it would be an exercise in managing side effects. You'd also be testing two different things at once: does it reach the correct branch of the conditional, and does it execute the rendering properly. With the simple refactoring above, you can test the rendering logic separately from the selection of which type of rendering to perform. Dividing methods into smaller pieces nearly always allows you to isolate functionality more effectively.

And There's More...

I've only scratched the surface in this article. There is a lot more to be learned from natural language writing style. I highly recommend picking up a copy of The Elements of Style. At worst, you'll greatly improve your English prose; at best, you'll also pick up a lot of insightful tips on writing clearer, more readable code.

A Response to Dreamhost

Jan 12, 2008

Dreamhost sure stirred things up with their rant about rails deployment and performance issues. Their complaint is roughly that they are having to work too hard to support rails; it needs performance and deployment improvements. Dreamhost voiced all of these complaints on the company's blog.

As they put it...

What I do have personal knowledge of is how difficult it can be to get a Rails application up and running and to keep it running. DreamHost has over 10 years of experience running applications in most of the most popular web programming frameworks and Rails has and continues to be one of the most frustrating.
...the solution from the Rails community itself was quite honestly, stupid.
That suggestion shows either a complete lack of understanding of how web hosting works, or an utter disregard for the real world.
Ruby on Rails needs to be a helluva lot faster.
Ruby on Rails needs to more or less work in ANY environment.

Forgive me for chopping up their arguments. If you have the time, please read the article. These quotes don't quite do it justice; they serve merely to provide the tone of the piece.

Frankly, I think DHH responded well: rails core team is there to serve their own purposes. Ruby on Rails doesn't need to do anything, despite what a Dreamhost blogger might suggest. The people hacking on rails core don't target platforms like Dreamhost's. So, in fact, rails is not designed to run on shared hosting.

It's not just shared hosting, either. Dreamhost is trying to support rails on massively oversold servers. Configuring rails in a shared environment (particularly under apache) is difficult; configuring and maintaining their servers to support rails probably costs Dreamhost a lot of money. Rails doesn't perform well in that environment, which probably costs them even more money in support calls from frustrated customers. So, their profit margins on rails service are likely narrowing.

Why is that the rails community's problem to solve? Dreamhost has a business challenge. Rails deployment isn't perfect — it has more than its fair share of problems; but, there are plenty of rails apps running just fine in production. This blog, a rails app, runs on a $20-per-month plan from slicehost. The identical app barely ran at Dreamhost. It's anecdotal evidence, sure, but the point is that rails deployment (at least on this scale) is far from impossible. Dreamhost just can't seem to squeeze it into their oversold offerings. And, they want somebody else to fix the problem for them.

There are plenty of shared hosting offerings available that support rails nicely. Those companies don't seem to be having any trouble creating an environment that works. None of them have published angry articles pointing the finger at "the rails community". But, I digress.

If it were an individual complaining — a noobie who was having a tough time deploying his rails app on a server he could afford — then perhaps there would be a reason to look at this differently. But, Dreamhost is a corporation; they are simply "...looking to capitalize on a framework that's driving a lot of demand...", to borrow some words from DHH.

If this had been a PC manufacturer, complaining loudly (and rudely) that linux doesn't run on cheap enough hardware for them to sell PCs for $50 each... If Linksys had posted a rant on their company blog, complaining that linux needed to be faster, because their routers weren't performing well enough... If IBM had posted a rant on their company blog, complaining that the linux community showed "...either a complete lack of understanding of how web hosting works, or an utter disregard for the real world."... Everybody would have said the same thing: so, fix it.

Plenty of businesses are capitalizing on open source technologies. But, part of the deal in the real world is that when something doesn't work right, you might have to fix it yourself; but you can, and that's part of what's so great about open source. Engine Yard just hired the entire rubinius team (and some people to hack merb, so I hear). Linux kernel development is funded in large part by corporate sponsorship, bounties, etc. The system works, because companies can solve their own problems, while leveraging the work of countless others. That's how it works in the real world.

An Introduction to Ruby's Enumerable Module

Jan 05, 2008

If you're still writing for loops, stop; there's a better way. Exactly why merits some examples.

[1, 2, 3].each { |number| puts number }

That's a pretty straightforward one. It says: with each element in this array, execute the given block. But, what if we wanted to do something more complicated, like determine whether there were any even numbers in the array?

[1, 2, 3].any? { |number| number % 2 == 0 } # => true
[1, 3, 5].any? { |number| number % 2 == 0 } # => false

The any? method says: are there any elements in the array for which this block evaluates to true? We might also want to know whether all? of the elements were even.

[2, 4, 6].all? { |number| number % 2 == 0 } # => true
[1, 2, 3].all? { |number| number % 2 == 0 } # => false

We can see that making good use of Enumerable methods produces highly readable code. Before we move on, there are a couple more methods that are worth demonstrating.

We've already seen how we can easily look inside of an array, using any?, and all?. Conveniently, Enumerable provides similar methods for extracting objects. What if we wanted, for example, to retrieve all of the even numbers?

[1, 2, 3, 4].select { |number| number % 2 == 0 } # => [2, 4]

...or, the first even number...

[1, 2, 3, 4].detect { |number| number % 2 == 0 } # => 2

Let's pick up the pace.

Putting the Pieces Together

If I were writing some blogging software, I might want to display recent activity in the administration area. With a big array of notification objects, I can use select to easily divide them up for display, based on recency.

today     = { |notification| notification.created_at > }
yesterday = { |notification| notification.created_at < && notification.created_at > 2.days.ago.to_date }
earlier   = { |notification| notification.created_at < 2.days.ago.to_date }

I'm actually doing something very similar to this in an app I'm working on. Since I love fat models, I created some helper methods on AR::Base. They look (something) like this:

class ActiveRecord::Base
  def today?
    created_at >
  end

  def yesterday?
    created_at < && created_at > 2.days.ago.to_date
  end

  def earlier?
    created_at < 2.days.ago.to_date
  end
end

So, now, we can greatly simplify our use of the select method, making it far clearer what we're up to.

today     =
yesterday =
earlier   =

...wait, what? The ampersand operator, as described by the PickAxe book:

If the last argument to a method is preceded by an ampersand, Ruby assumes that it is a Proc object. It removes it from the parameter list, converts the Proc object into a block, and associates it with the method

But, a symbol isn't a Proc object. Luckily, ruby calls to_proc on the object to the right of the ampersand, in order to try and convert it to one. Symbol#to_proc is defined in ActiveSupport; it creates a block that calls the method named by the symbol on the object passed to it. So, for the above examples, the blocks send today?, yesterday?, or earlier?, respectively, to each object that is yielded. Written out by hand, the expanded blocks would look like this:

today     = { |notification| }
yesterday = { |notification| notification.yesterday? }
earlier   = { |notification| notification.earlier? }
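For the curious, here's a rough sketch of what that Symbol#to_proc looks like (illustrative only, not the actual ActiveSupport source; Ruby 1.8.7 and later ship an equivalent in core):

```ruby
class Symbol
  def to_proc
    # Build a block that sends this symbol as a message to whatever
    # object the block is called with.
    proc { |obj, *args| obj.send(self, *args) }
  end
end

%w(no more for loops).map(&:upcase) # => ["NO", "MORE", "FOR", "LOOPS"]
```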

For additional to_proc coolness, it's worth mentioning Reg Braithwaite's String#to_proc (or get the code). I won't go on any further about that today, though.

Let's look at one last example — this time from real software. In order to add support for polymorphic, deep nesting in resource_controller, I had to write an algorithm that would determine which of the possible sets of parent resources were present at the invocation of a controller action. This is done by testing to see which of the params (parent_type_id) are present. So, if I say belongs_to [:project, :message], [:project, :ticket] in my comments controller, I need to determine: for which of those two possible sets are both necessary params present? Since belongs_to is just a simple accessor, the algorithm is easy to write.

belongs_to.detect { |set| set.all? { |parent_type| !params["#{parent_type}_id".to_sym].nil? } }
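Here's the detection at work on some illustrative data (a hand-written belongs_to array and hypothetical params, since we're outside a controller here; note the predicate checks that each parent param is present, i.e. not nil):

```ruby
# Two candidate parent sets, as declared by the comments controller, and
# params as they'd arrive for a comment nested under a project and a ticket.
belongs_to = [[:project, :message], [:project, :ticket]]
params     = { :project_id => "1", :ticket_id => "2", :id => "3" }

found = belongs_to.detect do |set|
  set.all? { |parent_type| !params["#{parent_type}_id".to_sym].nil? }
end
found # => [:project, :ticket]
```

The first set loses because params has no :message_id; the second wins because both of its parent params are present.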


Enumerable's biggest advantage over the for loop, as we've seen, is clarity. Using Enumerable's looping power tools, like any?, all?, and friends makes your code read semantically. Especially combined with Symbol#to_proc, code "...seems to describe what I want done instead of how I want the computer to do it.", to quote Raganwald.

Finally, in order to take your Enumerable usage to the next level, there are a few more methods you should have in your toolbox.
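A few that I'd nominate (my picks, not an exhaustive list): map transforms every element, inject folds the collection down to a single value, and partition does a select and a reject in one pass.

```ruby
[1, 2, 3].map { |number| number * 2 }               # => [2, 4, 6]
[1, 2, 3].inject(0) { |sum, number| sum + number }  # => 6
[1, 2, 3, 4].partition { |number| number % 2 == 0 } # => [[2, 4], [1, 3]]
```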

I'd also recommend keeping the Enumerable and Array documentation bookmarked, so that you can quickly refer to them. As a bonus, most of these methods exist in prototype.js, too; just take a look at the documentation.

Tuesday Trip to Toronto

Dec 17, 2007

I'm headed to Toronto for the next couple of weeks — leaving Tuesday as you might have guessed from the title. If any Torontonians want to meet up for a coffee (yes, I am a coffeegeek), a beer (but, really, preferably a coffee), or a hack session (preferably including coffee), shoot me an email (my address is listed top-right).

Noisy Backtraces Got You Down?

Dec 01, 2007

Bothered by all the noise in my tests' backtraces, I was thrilled when I first saw a thread on the shoulda mailing list with some discussion around making them a little bit easier on the eyes. Assuming that creating such a filter would be a long, tedious process of monkey-patching test/unit, I shelved the idea, figuring the job better left to somebody with more time to spare than myself.

When Dan Croak revived the thread with some sample code, cooked up at a Boston.rb hackfest, it occurred to me that the job was far more manageable than I had originally conceived. I quickly fired Dan an email asking whether he'd be interested in a pluginization of their concept. With a resounding yes! from Dan, we set off to create quiet_backtrace.

The rest of this post is cross-posted on GIANT ROBOTS

90% of this typical backtrace will not help you hunt and kill bad code:

1) Failure:
test: logged in on get to index should only show projects for the user's account. (ProjectsControllerTest)
[/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/assertions.rb:48:in `assert_block'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/assertions.rb:500:in `_wrap_assertion'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/assertions.rb:46:in `assert_block'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/assertions.rb:63:in `assert'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/assertions.rb:495:in `_wrap_assertion'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/assertions.rb:61:in `assert'
test/functional/projects_controller_test.rb:31:in `__bind_1196527660_342195'
/Users/james/Documents/railsApps/projects/vendor/plugins/shoulda/lib/shoulda/context.rb:98:in `call'
/Users/james/Documents/railsApps/projects/vendor/plugins/shoulda/lib/shoulda/context.rb:98:in `test: logged in on get to index should only show projects for the user's account. '
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/testcase.rb:78:in `__send__'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/testcase.rb:78:in `run'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/testsuite.rb:34:in `run'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/testsuite.rb:33:in `each'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/testsuite.rb:33:in `run'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/testsuite.rb:34:in `run'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/testsuite.rb:33:in `each'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/testsuite.rb:33:in `run'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/ui/testrunnermediator.rb:46:in `run_suite'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/ui/console/testrunner.rb:67:in `start_mediator'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/ui/console/testrunner.rb:41:in `start'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/ui/testrunnerutilities.rb:29:in `run'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/autorunner.rb:216:in `run'
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/test/unit/autorunner.rb:12:in `run'
one or more projects shown does not belong to the current user's account.
<false> is not true.

Noisy backtraces must be ruthlessly silenced like political dissidents in Stalinist Russia. This much is clear.

Quiet Backtrace

Install the gem:

  sudo gem install quietbacktrace

Require quietbacktrace:

## test_helper.rb
require 'quietbacktrace'

Run your Test::Unit tests:

1) Failure:
test: logged in on get to index should only show projects for the user's account. (ProjectsControllerTest)
one or more projects shown does not belong to the current user's account.
<false> is not true.

Ooh la la! Now we’re cooking with gas. However, those shoulda-related lines are cluttering an otherwise perfect backtrace. Luckily, Quiet Backtrace is designed to be extended with two types of blocks, each of which is handed one line of the backtrace at a time.

Silencers and filters

  1. Silencers let you specify conditions that, if true, will remove the line from the backtrace.
  2. Filters let you use Ruby’s succulent set of String methods to modify a line by slicing and stripping and chomping away at anything you deem ugly and unnecessary. (such as the :in `__bind_1196527660_342195’ in the original example)
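For example, here's the kind of String work a filter might do on each line, shown as plain Ruby (the exact regexp is up to you):

```ruby
# Strip the noisy `__bind_...' suffix that shoulda appends to block-based tests.
line = "test/functional/projects_controller_test.rb:31:in `__bind_1196527660_342195'"
line.gsub(/:in `__bind_\d+_\d+'/, '') # => "test/functional/projects_controller_test.rb:31"
```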

Say you want to remove Shoulda-related lines… you create a new silencer and add it to the Array of backtrace_silencers:

class Test::Unit::TestCase
  self.new_backtrace_silencer :shoulda do |line|
    line.include? 'vendor/plugins/shoulda'
  end

  self.backtrace_silencers << :shoulda
end

Re-run your tests and bask in the sweet sounds of silence:

1) Failure:
test: logged in on get to index should only show projects for the user's account. (ProjectsControllerTest)
one or more projects shown does not belong to the current user's account.
<false> is not true.

Exquisitely sparse. Quiet Backtrace clears distractions from the “getting to green” TDD process like a Buddhist monk keeping his mind clear during meditation.

Getting noisy again

On occasion, you’ll want to see the noisy backtrace. Easy:

class Test::Unit::TestCase
  self.quiet_backtrace = false
end

You can set Test::Unit::TestCase.quiet_backtrace to true or false at any level in your Test::Unit code. Stick it in your test_helper.rb file or get noisy in an individual file or test. More flex than a rubber band.

Using Quiet Backtrace with Rails

After you have installed the gem on your local machine, I recommend using the excellent gemsonrails plugin to freeze it to your vendor/gems directory and automatically add it to your load path.

Install gemsonrails and add it to your Rails app if you don’t already have it:

gem install gemsonrails
cd rails-app-folder

Then freeze quietbacktrace:

rake gems:freeze GEM=quietbacktrace

Quiet Backtrace will now work with your tests, but because this gem is meant to work on any Ruby project with Test::Unit, it does not turn any Rails-specific silencers or filters on by default. However, there is one of each, ready to be switched on, that removes the most dastardly lines.

Add these lines to your /test/test_helper.rb file to get perfectly clean Rails backtraces:

class Test::Unit::TestCase
  self.backtrace_silencers << :rails_vendor
  self.backtrace_filters   << :rails_root
end

Ongoing development

Bug reports and patches are welcome at RubyForge. Talk about it on the mailing list. It’s a safe place. A place where we can feel free to share our feelings. A nest in a tree of trust and understanding.

Introducing AttributeFu: Multi-Model Forms Made Easy

Dec 01, 2007

Ever needed to create a form that works with two models? If you have, you know that it's a major pain. It really seems like it should be easier, doesn't it? Powerful, SQL-free associations join our models; helpers build our forms in fewer keystrokes than it takes to call somebody a fanboy in a digg comment. But, as soon as you want to put the pieces together — as soon as you seek multi-model form bliss, the whole system falls apart.

Ryan Bates has popularized a few recipes for cooking up just such a dish (we all owe that man a debt of gratitude, don't we?). Great solutions (thanks Ryan!). There are only two flaws to speak of.

  • AJAX-friendliness: What is this - web1? Maybe we should all build our sites in frames, and pepper them with blinking text.
  • It's not a plugin: You mean I have to write code!? I thought rails was going to wash my dishes, walk my dog, and build my websites, while I sat outside on the porch smoking cigars, praying to DHH.

Being the compulsive pluginizer that I am, I simply couldn't resist this great opportunity.


attribute_fu makes multi-model forms so easy you'll be telling all of your friends about it. Unless your friends are serious geeks, though, most of them will probably hang up on you. So, I recommend resisting that urge, unless you're looking for a good bedtime story.

To get started, enable attributes on your association (only has_many for now).

class Project < ActiveRecord::Base
  has_many :tasks, :attributes => true
end

Your model will then respond to task_attributes, which accepts a hash; the format of which is a little bit different from what Ryan describes in his tutorials. Here's an example.

@project.task_attributes = { "1"  => {:title => "Wash the dishes"},      # existing tasks are keyed by id
                             "2"  => {:title => "Go grocery shopping"},
                             :new => {
                               "0" => {:title => "Write a blog post about AttributeFu"}
                             }
                           }

Follow the rules, though, and attribute_fu will shield you from the ugly details.

Form Helpers

The plugin comes with a set of conventions (IM IN UR PROJECT, NAMIN UR PARTIALZ). Follow them, and building associated forms is as easy as it should be. Take note of the partial's name, and the conspicuously absent fields_for call.

## _task.html.erb
<p class="task">
  <label for="task_title">Title:</label><br/>
  <%= f.text_field :title %>
</p>

Rendering one or more of the above forms is just one call away.

## _form.html.erb
<div id="tasks">
  <%= f.render_associated_form(@project.tasks) %>
</div>

You may want to add a few blank tasks to the bottom of your form — formerly requiring an extra line in your controller.

<%= f.render_associated_form(@project.tasks, :new => 3) %>

This being web2.0, removing elements from the form with Javascript (but your boss calls it AJAX) is a must.

## _task.html.erb
<p class="task">
  <%= f.text_field :title %>
  <%= f.remove_link "remove" %>
</p>

Adding elements to the form via Javascript doesn't require any heavy lifting either.

## _form.html.erb
<%= f.add_associated_link('Add Task', %>

It's that easy.

Last but not least (okay, maybe least), if you're one of those people who just has to be different - who absolutely refuses to follow the rules, you can still use attribute_fu (I guess...). See the RDoc for how.

Plugins, Plugins, Get Your Plugins

$ piston import vendor/plugins/attribute_fu

Check out the RDoc for more details.

Come join the discussion on the mailing list.

resource_controller 0.1.6: API Enhancements

Nov 26, 2007

It's been a while since the last r_c release; I have been busy. I'm still busy, so this isn't a huge release, but these changes have been in SVN for a while, so I thought I'd share them.

There was a cool thread over at where we discussed some potential API improvements for r_c (it's still open - come share your ideas!). Here's what made it into this release...


respond_to now accepts symbols, and has a few extra names. All of these examples have the same effect...

create.response :html do |format|
  format.js
end

create.respond_to :html, :js

create.responds_to do |format|
  format.html
  format.js
end


Callbacks Accept Symbols

I've always liked that the filters and callbacks in rails accept either the name of a method or a block. Now, all the callbacks in resource_controller accept them both, too!

create.before { object.author = current_user }

create.before :set_user

  def set_user
    object.author = current_user
  end

Configuration-Style Syntax for model_name, route_name, and object_name

Now, rather than overriding a method to provide alternate model, object or route names, you can set them through a configuration-style interface (though, you can still override the methods if you prefer).

model_name  :post_tag
route_name  :tag
object_name :tag

Note: resource_name is gone as of this release. If you were overriding that in your controllers, you'll have to use a combination of the other helpers to get the same effect.

LOTS More Helper Methods Exposed

If you've had a problem with r_c helper methods not being exposed in your views, it should be solved now. Just about every possible r_c method is now available in your views!

Get It!

1.2.3+ Compatible

svn export vendor/plugins/resource_controller

Edge/Rails 2.0 Compatible

svn export vendor/plugins/resource_controller

If you're looking for older versions, try browsing

For more info, see the resource_controller section of my blog.

Come join the discussion on the mailing list.

Looking for Rails Work?

Nov 09, 2007

I've posted this in a few places, and gotten some great responses. I don't know why I didn't think of posting it here earlier. (Of course, I give preference to anybody who reads my blog :)). Our little job blurb looks like this:

GiraffeSoft is looking to add an awesome rails developer to our team. If you live and breathe ruby & rails, are test(or spec)-obsessed (this one is absolutely 100% critical), want to work on a team that is actually agile, and get paid to work on our open-source projects (and maybe some of your own), we want to talk to you. We work from home, so self-motivation and management skills are critical. Where you are, and when you do the work, however, are not, so long as things get done on time. It will start out as a contract position, with the potential to become full-time down the line. If you're interested, send a short note, resume, and code samples to noticeme at giraffesoft dot ca.

Looking forward to hearing from you.

resource_controller: Redesign My API at

Nov 05, 2007

Now that I've gotten a few maintenance releases out the door, I'm starting to feel like r_c's feature set is pretty solid; I think most people's needs should be covered (still a few things left to include, but we're getting there). The next step in improving resource_controller is evolving the API. What better place to hold the discussion than Marc-André's

The First Refactoring

In the comments of the last release post, Luigi Montanez brought up an issue that's been bugging me since I first released the plugin. Currently, some customizations (before/after callbacks, response blocks, etc) are made with a "DSL"-style API. Some customizations (like alternate model names), however, are made by overriding helper methods. It's not that I don't like overriding methods, but I find the API a bit inconsistent in that way.

Currently, providing an alternate model name looks like this...

class PostsController < ResourceController::Base
  def model_name
    'blog_post'
  end
end

It would be really neat, though, if it was possible to set the model_name the same way you set the flash...

class PostsController < ResourceController::Base
  model_name :blog_post
end

Please, come join the discussion at RmC, and put some of your ideas on the table for resource_controller's API.

resource_controller 0.1.5: Another Week, Another Release

Nov 01, 2007

Since last week, I've had some really great user feedback from the community. From that feedback, four new features made it into this week's release. Here they are.

Support for Custom Route Names

Osh let me know that r_c didn't really support non-standard resource names. That is, it wasn't possible to have a controller whose name didn't match its model. Now, not only does r_c support non-standard resource names, but it supports every configuration of non-standard resources imaginable.

Osh's example was the simplest, and probably the most common. He has a resource with a name that doesn't match the model. In that case, all you have to do is override model_name.

class TagsController < ResourceController::Base
  def model_name
    'photo_tag'
  end
end

In that example, the variables and form builders available in your views would be named photo_tag(s). If you wanted to change that to match the name of your controller, you'd override object_name.

class TagsController < ResourceController::Base
  def model_name
    'photo_tag'
  end

  def object_name
    'tag'
  end
end

Your controller might have some non-standard name, too. If it does, just override route_name.

## routes.rb
map.resources :tags, :controller => "photo_tags"

## photo_tags_controller.rb
class PhotoTagsController < ResourceController::Base
  def route_name
    'tag'
  end
end

All of those new helpers default to the value of the resource_name helper, which is derived from the name of the controller.

New Urligence Syntax

In order to support non-standard routes in resource_controller, Urligence has gained some new syntax. smart_url infers the name of a resource from an object's class name, so a non-standard route name was impossible. Now, if you provide a 2-element array parameter to smart_url, the first element (a symbol) is used as the resource name, and the second element is the object that is passed to the url helper. It looks like this...

smart_url([:tag, @photo_tag])

All of the old syntax is still, of course, valid, and can be mixed and matched with the new syntax.

hash_for, path, and url

On the topic of Urligence, Hsiu-Fan raised the issue that smart_url would only return paths. Not really an accurate method name, and not convenient if you want a URL, or a hash. Hsiu-Fan was also nice enough to send in a patch, which I refactored a bit, and now we have four methods.


Important: The smart_url method now outputs URLs, not paths. If you are depending on paths coming out of smart_url anywhere in your code, or tests, this may be a code breaking change.

r_c has also gained helpers for all of these methods...

# object

# collection

New Instantiation Syntax

Another thing Osh brought up with me, during our back and forth, was his concern with r_c's inheritance syntax. Since there isn't any particular reason for that syntax other than the fact that I like it, I have added an alternative. Just say resource_controller, and you'll get the exact same effect as inheriting from ResourceController::Base.

class PhotosController < ApplicationController
  resource_controller
end

Just make sure that you call the resource_controller method before you use any other r_c features.

Get It!

1.2.3+ Compatible

svn export vendor/plugins/resource_controller

Edge/Rails 2.0 Compatible

svn export vendor/plugins/resource_controller

If you're looking for older versions, try browsing

For more info, see the resource_controller section of my blog.

Come join the discussion on the mailing list.

resource_controller update: Bug Fix, Generators, and Google Group!

Oct 25, 2007

A week after I released resource_controller, there were a few issues that needed to be resolved.

Specifying Actions

Hampton pointed out that I hadn't mentioned any way of specifying which actions a controller should respond to. You might want to have a controller that only responds to show, and index, for example. It was possible (and documented - see the RDocs), but somewhat broken. This release fixes that bug.

You can either provide a list of actions, or use :all, :except => syntax.

class PostsController < ResourceController::Base
  actions :show, :index
  ## .. or ..
  actions :all, :except => :index
end


resource_controller comes with a scaffold_resource generator now. In the 1.2.3+ stream, it will override the built-in version. I figure - if you're using resource_controller, you probably won't have much need for the default one.

It's pretty clever, too. If you have Haml installed, it generates Haml. If you have Shoulda installed, it spits out functional tests that make use of Shoulda's wonderful should_be_restful macro (note: I'm not necessarily opposed to doing the same for RSpec, if there's demand - let me know). Otherwise, it spits out erb, and the same functional tests as the generator included with rails.

The syntax is exactly the same as the built-in generator.

script/generate scaffold_resource post title:string body:text

Get It

1.2.3+ Compatible

svn export vendor/plugins/resource_controller

Edge/Rails 2.0 Compatible

svn export vendor/plugins/resource_controller

If you're looking for older versions, try browsing

For more info, see the release announcement.


I also wanted to announce that I've created a google group for resource_controller.

Come join it, and discuss what can be done to make r_c even better!

Introducing resource_controller: Focus on what makes your controller special.

Oct 19, 2007

If you're using rails, and you've kept up with what's happening in the community, and in the framework, you're probably writing RESTful apps. You may have noticed, like I did, that most of your controllers follow the same basic pattern - the one that the scaffold_resource (just scaffold in edge/2.0) generator spits out.

I wanted a great way to hide that pattern, and describe the unique features of my controllers through a more domain-specific API. Since I tend to write a lot of nested and polymorphic resources, I also wanted to DRY up all of that code, and make generating urls a lot more intuitive. Finally, I wanted all of that code to be well-tested, so that I could count on hacking it up when I needed new features, without breaking things.

So, I created it, and here it is.


A basic, out-of-the-generator resource just requires inheriting from ResourceController::Base.
class PostsController < ResourceController::Base
end


Before/after callbacks, responses, and flash messages are managed through an extremely flexible API. I tried my best, here, to make syntax succinct when your needs are succinct, and allow API calls to be organized in an intuitive way. What I came up with was a scoping system, where calls to the various API methods can be chained, or scoped with a block.

Take the create action, for example. You might want to add a before block, to assign the author attribute of your post model to the current user.
class PostsController < ResourceController::Base
  create.before do
    object.author = current_user
  end
end
As your application matures, you may want to add an RJS response to your create method, for some AJAXy goodness. Easy.
class PostsController < ResourceController::Base
  create.before do
    object.author = current_user
  end

  create.response do |wants|
    wants.html
    wants.js
  end
end
Actually, there's even a shortcut for adding responses. You can omit the response block, if you just want to add responses to what's already provided by resource_controller (just HTML) or has been added elsewhere.
Actually, there's even a shortcut for adding responses. You can omit the response block, if you just want to add responses to what's already provided by resource_controller (just HTML) or has been added elsewhere.
class PostsController < ResourceController::Base
  create do
    before { object.author = current_user }
    wants.js
  end
end
Since I commonly use the same RJS template for several actions, I've found this short-form really handy, because I can use a one-liner to add the same response to several actions.
class PostsController < ResourceController::Base
  [create, update].each { |action| action.wants.js { render :template => "post.rjs" } }
end
Oh yeah, and you can change the flash too.
class PostsController < ResourceController::Base
  create.flash "Wow! RESTful controllers are so easy with resource_controller!"
end
For more API details, and examples, see the RDocs.


Helpers are used internally to manage objects, generate urls, and manage parent resource associations.

If you want to customize certain controller behaviour, like member-object, and collection fetching, overriding helper methods is all it takes.

Note: For certain resource_controller functionality to work properly, user-defined helpers must make use of the other r_c helpers. The following examples do not follow that convention, for clarity - see the docs for more details.

If you wanted, for example, to use a permalink for your post, you'd need to alter the way that posts are fetched. Just override the object method.
class PostsController < ResourceController::Base
  def object
    @object ||= Post.find_by_permalink(params[:id])
  end
end
You'll probably also want to add pagination to your index method. In the same way we altered member object fetching, we can change collection fetching behavior.
class PostsController < ResourceController::Base
  def collection
    @collection ||= Post.find(:all, :page => {:size => 10, :current => params[:page]})
  end
end
Details and examples in the RDocs.

Namespaced Resources

...are handled automatically, and any namespaces are available, symbolized, in array form from the namespaces helper method.
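As a rough illustration of what that means (a hypothetical, simplified sketch - the real helper works from the controller's actual location, and namespaces_for is a made-up name):

```ruby
# Hypothetical sketch: everything before the controller name in the
# controller path, symbolized, in array form.
def namespaces_for(controller_path)
  controller_path.split("/")[0..-2].map { |segment| segment.to_sym }
end

namespaces_for("admin/products") # => [:admin]
namespaces_for("posts")          # => []
```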

Nested Resources

Again, handled automatically. This can be a real pain, and it's a lot easier with r_c. With an ActiveRecord-like syntax, just say belongs_to :model, and resource_controller takes care of the associations for you.
class CommentsController < ResourceController::Base
  belongs_to :post
end

Polymorphic Resources

This is a concept that can be found in a lot of my apps. Prior to resource_controller, it was a real pain on various levels. I solved some of my problems with urligence, and took things the rest of the way with resource_controller. It really does pretty much everything for you.

Just use the belongs_to syntax in your controller, and r_c infers whichever association (if any) is present when an action is called. Each argument passed to belongs_to is a single possible parent; there is no support for deeply nested resources (although I'm not necessarily opposed to adding it, if there's demand).
class CommentsController < ResourceController::Base
  belongs_to :post, :product
end
In the above example, the controller will automatically infer the presence of either a parent Post, or Product, and scope all the comments to that parent. The controller will also respond without a parent if you have that in your routes.

Thanks to urligence, generating urls in your polymorphic controller's views is really easy. The object_url, and collection_url helpers will maintain your parent resource's scope automatically.
# /posts/1/comments
object_url          # => /posts/1/comments/#{@comment.to_param}
object_url(comment) # => /posts/1/comments/#{comment.to_param}
edit_object_url     # => /posts/1/comments/#{@comment.to_param}/edit
collection_url      # => /posts/1/comments

# /products/1/comments
object_url          # => /products/1/comments/#{@comment.to_param}
object_url(comment) # => /products/1/comments/#{comment.to_param}
edit_object_url     # => /products/1/comments/#{@comment.to_param}/edit
collection_url      # => /products/1/comments

# /comments
object_url          # => /comments/#{@comment.to_param}
object_url(comment) # => /comments/#{comment.to_param}
edit_object_url     # => /comments/#{@comment.to_param}/edit
collection_url      # => /comments
More in the RDocs.

Getting it

resource_controller is available under the MIT License.

It is currently available in two streams:

1.2.3+ Compatible

Install it:

svn export vendor/plugins/resource_controller

SVN (stable):

SVN (ongoing):

Note: If you want to run the tests, cd in to the test directory, and type rake test.

Edge/Rails 2.0 Compatible

Install it:

svn export vendor/plugins/resource_controller

SVN (stable):

SVN (ongoing):

Note: If you want to run the tests in the edge-compatible version, cd in to the test directory, and type rake rails:freeze:edge. I didn't want people to have to download all of edge rails just to get the plugin.

Also, check out the rdoc


Drop me a line and let me know what you think. I'd love to hear your suggestions for API or implementation improvements, or anything else!

Rails polymorphic url generation sucks. Here's something better.

Oct 11, 2007
If you've ever written a rails app with a lot of polymorphic resources, you've probably noticed that generating urls for your polymorphic routes is a real pain. In edge rails, some of the problems have been solved by the new polymorphic url methods. Unfortunately, those solutions are both inadequate, and "not recommended for use by end users" (see DHH's comment at the bottom). The problem is that if you want to use the same views and controller code without a lot of ugly conditionals to generate your urls, you're out of luck.

The Problem

## routes.rb

map.resources :photos
map.resources :users, :has_many => :photos
map.resources :tags, :has_many => :photos

## photos_controller.rb

class PhotosController < ApplicationController
  ## in the create method
  wants.html { redirect_to url_for(url_helper_params) } # I know DHH doesn't want us to use this, but how else can I get this effect?

  ## in the destroy method
  wants.html { redirect_to photos_url } # unfortunately, this will drop any parent paths we may have
                                        # (a user who called DELETE /users/james/photos/1 ends up at /photos instead of /users/james/photos)

  # Note: I actually use a modification to make_resourceful to do this far more cleanly, but wanted to be as complete as possible in this example.
  def parent_object
    case
    when params[:user_id] then User.find_by_id(params[:user_id])
    when params[:tag_id]  then Tag.find_by_id(params[:tag_id])
    end
  end

  def url_helper_params
    [parent_object, @photo].reject { |obj| obj.nil? } # we have to get rid of the parent_object if it's nil, or else this will fail :(
  end
end

# index.rhtml

<%= link_to "New Photo", new_photo_path %> # same problem as our destroy redirect
We have routes like /users/james/photos, and /tags/montrealonrails/photos. We want to seamlessly maintain the top level of the path without resorting to conditionals or repetition. That is, if you browse to /users/james/photos, and click on the show link, you should go to /users/james/photos/1, not /photos/1.

There should also be a polymorphic plural url helper. We have url_for for members (/photos/1), but for collections (/photos), we're stuck resorting to calling the url helpers themselves. So, to create a collection url, we'd have to check what the parent object was, and then call the appropriate helper based on that. And, all of this is further complicated when there are namespaces (/cms/products). Wouldn't it be great if there was a better way?

Stupid Simple Urls

Try saying that three times fast. Done? Good, because it's not called that - I was just messing with you.


Intelligent url building.
## routes.rb

map.resources :photos
map.resources :users, :has_many => :photos
map.resources :tags, :has_many => :photos

map.namespace :admin do |admin|
  admin.resources :products, :has_many => :options
  admin.resources :options
end


assert_equal "/photos", smart_url(:photos)
assert_equal "/photos", smart_url(nil, nil, nil, :photos) # in case your parent objects aren't there
assert_equal "/users/james/photos", smart_url(User.find_by_name(:james), :photos)
assert_equal "/users/james/photos/1", smart_url(User.find_by_name(:james), Photo.find_by_id(1))
assert_equal "/photos/1", smart_url(nil, Photo.find_by_id(1)) # in the case that your parent object isn't there
assert_equal "/photos/1/edit", smart_url(:edit, Photo.find_by_id(1))

assert_equal "/admin/products", smart_url(:admin, :products)
assert_equal "/admin/products/1", smart_url(:admin, Product.find_by_id(1))
assert_equal "/admin/products/1/options", smart_url(:admin, Product.find_by_id(1), :options)
assert_equal "/admin/products/1/edit", smart_url(:edit, :admin, Product.find_by_id(1))
assert_equal "/admin/options", smart_url(:admin, nil, :options) # in the case that your parent object isn't there
It's pretty straightforward. Just call smart_url with any parameters that might be there, including ones that might be nil: symbols at the beginning for namespaces (/cms/products), member actions (/products/1/edit), or both (/cms/products/1/edit), objects in the middle, and a symbol at the end for a resource's index method (/products). That's it.

Get It

Get version 0.1 at Oh, and don't take the name of the tag too seriously - I just wrote it tonight.

svn export vendor/plugins/urligence

Let me know what you think of it.

Testing Misconceptions #1: Exploratory Programming

Oct 05, 2007
So, my last essay on testing was ycombinatored, and then reddited the next day. Cool! I'm honored to have so many people reading, and discussing my article. It's a pleasure.

I found some of the discussion very interesting. It seems like a lot of developers still don't believe in unit testing their code. In fact, many made arguments that questioned, or even outright dismissed the value of unit testing (for more such comments, see the reddit and ycombinator threads, or the thread of comments on the article itself). What surprised me most, though, was the number of misconceptions people have about what testing actually is, why we test, and how long it takes. Many, if not most, of the anti-testing arguments are based on entirely false premises.

In this on-going series, I'll put those misconceptions to the test (pun intended), and provide my take on what the truth is.

Testing Myth #1: I can't test first, because I don't have an overall picture of my program.

BTUF (Big Test Up Front) incurrs [sp] many of the same risks as BDUF (Big Design Up Front). It assumes you are creating artifacts now that will last and not change drastically in the future.
Yes, TDD implies that there is a more or less exact specification. Otherwise, if you're just experimenting, you would have to write the test and your code, and that's going to make you less inclined to throw it away and test out something else (see "Planning is highly overrated").
When I really have latitude in my goals, my code is just about impossible to pin down until it's 95% implemented.
How can you test something if you don't even know how or if it works? You need to hack on it and see if you can get things going before you nail it down, no?
According to this group, testing first is impossible because they're not sure exactly what they're writing. Some of them go so far as to equate testing first with big up front designs. The assumption, in both cases, is that writing your tests first means writing all of your tests first, or at least enough to require a general overview of your program. Nothing could be further from the truth.

It seems likely to me that this group's misconception stems from mixing up unit testing with acceptance testing. Acceptance testing, whether automated or manual, would require an overall specification for how (at least some major portion of) the system should function. Nobody is suggesting that you write your acceptance tests first.

Unit tests verify components of your program in isolation. They should be as small as possible. And, in fact, if your unit tests know enough about your program that they're starting to look like acceptance tests, their effectiveness is going to be diminished considerably. That is, you don't want your unit tests to have an overall picture of what you're building. They should have as little of that picture as possible.
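As a contrived illustration of "as small as possible" (hypothetical code, no test framework required): a unit test that knows only about one tiny component, with no overall picture of the program around it:

```ruby
# The unit under test: knows nothing about the rest of the application.
def slugify(title)
  title.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-+|-+\z/, "")
end

# The test: exercises the unit in isolation. No acceptance-level
# knowledge of the system is needed to write or run it.
raise "fail" unless slugify("Hello, World!") == "hello-world"
raise "fail" unless slugify("  TDD  ") == "tdd"
```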

Separating Concerns

Writing tests first doesn't mean you can't explore. It means that the exploration process happens in your tests, instead of your code - which is great! In your tests is where the exploration process belongs.

When you explore in your implementation code, you're trying to answer two questions at once: "What should my code do?" and "What's the best way to implement my code's functionality?". Instead of trying to juggle both concerns at once, testing first divides your exploration in to two stages. It creates a separation of concerns. You might even say that TDD is like MVC for your coding process.

You begin your exploration, of course, by putting together some preliminary tests for the first bit of functionality you're going to write. By considering the output before the implementation, you gain several advantages. The classic example, here, is that you get the experience of using your interface, before you've invested any time in bad API design ideas you may have had. But wait, there's more!

You also get the opportunity to focus on what your code will do. Before I began practicing TDD, I would regularly be almost all the way through writing a block of code before I realized that the idea just wasn't going to work. The thing is, when you're exploring, and you're focused on one or two implementation lines at a time, the result of the code becomes an afterthought. By spending that minute or two up front thinking about what should come out of your code, you'll save yourself a ton of backtracking, and rethinking later on.

That's all for today

I hope you enjoyed the first installment of Testing Misconceptions. I'd love to hear your feedback, or ideas for topics. Please feel free to leave them in the comments, or shoot me an email. Please check back for more episodes.

Montreal on Rails #3

Oct 03, 2007
With our new digs, things were better than ever at last night's MoR. It's really great to see so many people sharing their knowledge, and just participating in the Montreal community. Aside from my presentation, Gary and François both gave outstanding talks.

Javascripting for Rails Devs

Gary Haran talked to us about how to use prototype.js to ease some of the pains of writing javascript, and get yourself a beautiful ruby-like API. Since Gary taught me everything I know about Javascript while I was working at ZipLocal, I have been pushing him to share his knowledge with the masses, and he finally did with last night's presentation. Javascript (especially when combined with prototype) is awesome, and I hope this will be the first of many presentations from Gary on how to harness the awesome power in JS with less of the pain.

Piston: Vendor branch management made simple

François Beausoleil, the author of Piston, the excellent vendor branch management gem, was nice enough to come all the way to Montreal, and talk to us about his creation. In a nutshell, Piston aims to solve all of the problems that you get, when you depend on svn:externals to manage plugins in your rails app. Things like a new revision of a plugin breaking all your tests, and blocking deployment of a critical bug fix come to mind. I had heard a lot about Piston before his talk, but I am now fully sold, and will spend some time converting my current apps to use it this afternoon.

After all the talks were over, I had some great conversations with some of the awesome Montrealers who attend these great events. I talked Giraffes with Mat, iPhone(s) with Carl, number puzzles with Daniel, debated test macros with Marc-André, and discussed the latest and greatest in photography world with Alain Pilon. All in all, a great evening. Thanks to Fred and Ben for the space!

For more on last night's event, check out the photos, and my post about my presentation. As I said in my other post, video of last night's presentations will be available very soon.

Plugins I've Known and Loved #1

Oct 03, 2007
Last night at Montreal on Rails, I gave a talk about two of my favourite Rails plugins: make_resourceful & shoulda. For those of you who were unable to attend, the videos will be available very soon for your consumption.


By encapsulating the standard RESTful controller pattern, make_resourceful allows you to focus on what's really important in your controllers, and saves you a ton of keystrokes along the way. The stable version is available here. Although, I prefer to live on the edge. Also, please join the discussion.


Shoulda is the awesome set of test macros, and extensions to test/unit by the folks over at thoughtbot. A major time and keystroke saver for test-driven developers. (You are writing tests, right? No? Start here.) So, get your shoulda here, and be sure to check out the rDocs and tutorials.

Also, please check out the presentation slides, and the source code from my demo (note: see here for where I started, and HEAD of trunk is roughly where I ended). Also, see my review of MoR#3.

Ruby 1.8.5 *_eval Methods are subject to access modifiers on self!

Sep 28, 2007
We're running CentOS 5 on our svn/CC.rb server, and there are no packages for Ruby 1.8.6, so naturally, I just went with the 1.8.5 install. Everything was working great, until I came across this gotcha. Since I can't seem to find it documented anywhere, hopefully other people will be able to find this article when they google for it.

In Ruby 1.8.5, self, inside an instance_eval block, is subject to the same access modifiers as any other explicit receiver. Protected and private methods raise "NoMethodError: protected/private method `x' called for ...". Luckily, though, the send(:method_name) hack still works to get around any access modifier. So, if you need to get your code running on 1.8.5, that's the workaround.
class Person
  protected
    def something_secret
      "don't tell anybody"
    end

  private
    def something_really_secret
      "REALLY don't tell anybody"
    end
end

person = Person.new
## Ruby 1.8.5

person.instance_eval { self.something_secret } # NoMethodError: protected method `something_secret' called for ...

person.instance_eval { self.something_really_secret } # NoMethodError: private method `something_really_secret' called for ...

person.send :something_secret # "don't tell anybody"
person.send :something_really_secret # "REALLY don't tell anybody"

## Ruby 1.8.6

person.instance_eval { self.something_secret } # "don't tell anybody"
person.instance_eval { self.something_really_secret } # "REALLY don't tell anybody"

person.send :something_secret # "don't tell anybody"
person.send :something_really_secret # "REALLY don't tell anybody"

association_proxy.is_a?(Array): acts like an array, but doesn't save like an array

Sep 26, 2007
So, how many times have you had the urge to call reject!, or one of your other favourite destructive Array methods, on an association? Lots? Me too. So, I think it's worth clearing the air on our good friend association proxy. He really, really wants you to believe he's an array. He'll even tell you he is.
assert @post.comments.is_a?(Array)
But, don't believe him, because a lot of his array functionality is useless to us. Your favourite rubyisms (reject!, select!, etc) will modify the collection, but will not modify the associations. This test passes:
def test_association_proxy_really_looks_like_an_array_but_it_isnt
  assert Post.find(1).comments.is_a?(Array)
  assert Post.find(1).comments.size > 0

  post = Post.find(1)
  post.comments.reject! { true }
  assert_equal [], post.comments

  assert Post.find(1).comments.size > 0
end
Instead, of course, you have to use the delete method.
Association proxy is tricky.

Promiscuity for Fun and Profit

Aug 30, 2007
Sometimes, we advanced rails users take our knowledge of the framework for granted, and have unfair expectations of less advanced rails coders. The worst examples of such unreasonable demands can be found on a daily basis in the IRC channel (#rubyonrails on Freenode, for the uninitiated). And, while the IRC channel can certainly be harsh, even those of us who don't throw around the term "noob" like it's going out of style sometimes (often) complain more than we help. As one of my peers likes to remind me, I'm present in that category more often than I'd like to admit. (As an aside: while Gary is often the one I'm ranting to, his code is never the subject of the rant.) I'd really like to turn my negative energy in to something positive. This will be the first of many articles that seek to add to the pool of resources on rails best practices, shortcuts, tips, and tricks. I hope you like it.

Today's Topic: Polymorphic Associations

It was a check-in today that sparked the round of complaining that sparked the round of Gary telling me to shut up that sparked the desire to write this article. The check-in added a bunch of tables to a project, with a schema that looked (something) like this:
create_table :cities do |t|
  t.column :name, :string
end

create_table :city_aliases do |t|
  t.column :city_id, :integer
  t.column :alias, :string
end

create_table :provinces do |t|
  t.column :name, :string
end

create_table :province_aliases do |t|
  t.column :province_id, :integer
  t.column :alias, :string
end
Well, it's definitely not the worst schema ever, but it can be improved drastically with a polymorphic association. What the heck is that, you ask?

Promiscuous Models

Essentially, a polymorphic association is a model that can belong_to any other table. Polymorphic associations are the way that a lot of popular plugins work. Ever wonder how acts_as_taggable just magically adds tagging to any model, without any migration? Well, it's super easy. Just replace your duplicate tables with one of these babies:
create_table :name_aliases do |t|
  t.column :aliasable_type, :string
  t.column :aliasable_id, :integer
  t.column :name, :string
end
You'll need a model, too:
class NameAlias < ActiveRecord::Base
  belongs_to :aliasable, :polymorphic => true
end
Note: I chose the name NameAlias because alias is a reserved word in ruby, and it may cause you some problems if you try to name a model that.

Going back to our example, we can redefine our models as follows:
class City < ActiveRecord::Base
  has_many :name_aliases, :as => :aliasable
end

class Province < ActiveRecord::Base
  has_many :name_aliases, :as => :aliasable
end

Now What?

A polymorphic association works just like any normal association. That means you can use all of your favorite AssociationProxy methods, and everything.
City.find_by_name("montreal").name_aliases.create :name => "mtl"
...will work just as it always has.

How'd you do that?

When you tell rails that :polymorphic => true, in your belongs_to options, it knows that it needs to store the id and the type of the associated class. You give the association a generic name like aliasable or taggable, to let rails know what the fields are named in the database. For example, if you call your association aliasable, you need to have the fields aliasable_type and aliasable_id. Make sure aliasable_type is a long enough string column to store the name of your longest model in it. In production, both of these fields should probably be indexed. When you associate a class with your polymorphic model, you have to tell it to associate as your generic association name, instead of its own, to let rails know how to join the tables up properly: :as => :aliasable.
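To make the mechanics concrete, here's a hand-rolled sketch of the lookup ActiveRecord performs for you (hypothetical stand-in classes and variable names; not the real implementation):

```ruby
# Minimal stand-ins for the associated models.
class City; end
class Province; end

# A name_aliases row stores both the parent's class name and its id.
row = { :aliasable_type => "City", :aliasable_id => 42 }

# Rails constantizes aliasable_type to know which model to load...
parent_class = Object.const_get(row[:aliasable_type])
parent_class # => City

# ...and would then call parent_class.find(row[:aliasable_id])
# to fetch the parent record itself.
```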

Name Aliases that Rock

Because this example is a particularly useful one, it's worth showing you a method that I write often. It will work with any association, but I often use it with aliases. It adds an extra finder, which allows you to find_by_name_alias.
class Person < ActiveRecord::Base
  has_many :name_aliases, :as => :aliasable

  def self.find_by_name_alias(name)
    Person.find(:all, :include => :name_aliases, :conditions => ["name_aliases.name = ?", name])
  end
end

Feedback, please!

So, there you have it: polymorphic associations. This is where you come in. I have some questions:
  • How'd you like it?
  • What could be improved?
  • Should I use a different format (like a screencast)?
  • Are there any topics you'd like to see?
Thanks for reading.

We don't write tests. There just isn't time for luxuries.

Aug 22, 2007
Over the last few weeks and months, I've often had the displeasure of being reminded of the common opinion that writing automated tests slows down the development process. "We just didn't have time to write tests" or "Nope, we don't have any test coverage... we just didn't have time." are common expressions of this problematic trend in software development. In order to disregard something widely accepted as a necessary practice, writing automated tests must really take forever, right? Wrong. I call this the testing-slows-us-down argument, and, frankly, I don't buy it at all. Here's why.

Everybody Tests

Everybody has to test their code. It's just that those of us who write automated tests write code to do our testing, where non-testers use humans (themselves, usually) to manually verify correct behavior. So, we can be sure that the testing-slows-us-down argument rests on the premise that manually verifying behavior is faster than writing automated tests.

Investing Time

The two methods of testing distribute your time investment differently. Since automated tests run very quickly, the time investment is made in writing them. Running the tests is nearly instantaneous. With manual testing, it is the opposite. Designing the tests takes nearly zero time, whereas actually running the tests takes a measurable amount of time each time you need the test. So, with automated testing, you get most of your time investment out of the way at the outset. Once it's written, it's written. With manual tests, your time investment grows each time you test for something. In order for the testing-slows-us-down argument to remain valid, the total time investment made on manual testing must be less than the time investment required to write the automated tests.

In these terms, the testing-slows-us-down argument can be expressed as follows: The time it takes to write the automated tests for a feature is greater than the total time that will be spent manually testing that feature throughout the lifetime of the project.

Since one of my goals for this article is proving the "no time to test, we need to launch our product" people wrong, I want to show that this (short-sighted) argument is as, if not more, flawed than the testing-slows-us-down argument. I'm going to phrase the product-launch argument as follows: The time it takes to write the tests for a feature is greater than the total time that will be spent manually testing that feature until product launch.

Breaking Even

If it takes twenty minutes to write your automated tests, and one minute to test your feature manually, the break even point for an automated tester is when they've run their tests twenty times. After that point, automated tests become cheaper and cheaper (per test). Manual tests continue to become increasingly expensive over their lifetime. So, the testing-slows-us-down argument can be rephrased once more, making use of our new terminology: The automated testing break even point will not be reached before product launch.
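The arithmetic is easy to sketch (using the numbers from the example above):

```ruby
write_cost  = 20 # minutes to write the automated tests, paid once
manual_cost = 1  # minutes per manual verification pass

automated_total = lambda { |runs| write_cost }         # flat after the up-front investment
manual_total    = lambda { |runs| runs * manual_cost } # grows with every run

# The break-even point: the first run count where manual testing
# has cost at least as much as the automated suite.
break_even = (1..1000).find { |runs| manual_total.call(runs) >= automated_total.call(runs) }
break_even # => 20
```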

Developers Developers Developers

Because the cost of manual testing grows considerably each time you run your tests, bringing on extra developers adds a multiplier to your (growing) test cost. Anytime any developer wants to merge their branch back in to trunk, they're going to have to take a look over the whole source tree to make sure they didn't break anything. This means, for the reasonably attentive developer, testing code that they didn't write - going over all of the features. It's the manual testing equivalent of running the whole test suite. Not only is this a time-consuming process, but it is incredibly error prone.

Debugging Time!

No developer can reasonably be expected to remember, and meticulously verify, all of the expected functionality of an application at each check-in. That's a superhuman expectation. You might even say it's a job better suited to a machine? Seriously, though, code is interdependent. Changes in one area can have impacts all over the application. When relying on humans to verify application behavior, a lot of bugs are going to slip through the cracks.

More bugs means more time spent debugging, which, incidentally, means more time spent testing, for the manual tester. Moreover, it's not uncommon for the same bug to surface repeatedly. We've all seen it. It's called regression, and it's why us automated testers have something wonderful called regression tests.

With every bug, automated testers (like always) make a one-time investment. Since automated testing is far more likely to prevent regressions than manual testing, the time benefits here are two-fold. First, once the regression test is written, it's written, and the behavior doesn't need to be repeatedly verified by hand. Second, the bug is far less likely to resurface. Manual testers may argue, here, that they are capable of adding the regression tests to their regular passes over their code, keeping the bugs out just the same. While this may be the case for the most superhuman of individuals, the likelihood that an entire team is capable of such incredible manual testing is very low.

So, as a general rule, I think it's safe to say that automated tests significantly reduce debugging time, by providing a much higher degree of accuracy, and acting as a powerful weapon for preventing regression.

Release Already

It seems pretty clear to me, after a thorough analysis of the testing-slows-us-down argument, that its proponents are, at the least, misguided. Automated testing is at least as fast as manual testing. In writing this article, I thought a lot about why so many people have this common misconception. I think it mostly stems from one of the following.

The most common cause of this misconception is likely naivete. Many of the challenges that really bring out the best in automated testing (and, consequently, the worst in manual testing) are far more evident with bigger projects. While I maintain that automated testing is at least as fast as manual testing on all projects, it's likely that bigger projects will see much bigger benefits. The problem, here, is that big projects often start out as small ones. And, unfortunately, growing pains can cause some of the most difficult problems with keeping software working properly.

My assumption is that the ones who aren't naive are just lazy. Learning how, what, and when to write automated tests can be a difficult undertaking, but it's well worth it. Like writing the tests themselves, a little bit of up-front investment in your skillset will save you loads of time, and headache later on. So, do yourself a favor, and learn to test. You'll thank yourself for it.

Update: See my first response in a new series to some of the discussion surrounding this article.

3 ways to improve your bullshit methodology

Aug 10, 2007
Marc Cournoyer writes a great post detailing 5 ways to know whether your methodology is working (or whether it's bullshit), specifically re: TDD/XP.

I have been trying to improve my TDD practice for some time now. I am slowly getting better at writing tests first (and just writing tests, of course), but it does represent quite a significant shift in thinking. And, when you're used to writing the code first, as Marc says, that's where you're naturally going to go when the pressure is on. So, how do we stop this behavior? How do we get into the test-first zone?

Here are 3 things that have started working for me:

1. When you're stuck on what to test, make a list of possible inputs and expected outputs

One of the biggest challenges for me has been overcoming my tendency toward doing something like exploratory testing of my own code as I write it. This was the bad habit of not knowing what my code was going to do before I wrote it. I'd spend some time fiddling with a few lines that I thought might accomplish what I wanted, looking at the output until it looked right (sound familiar?). With TDD, you have to start by thinking about what your code will output.
Take a second before you write any tests, and make a list of input parameters and output expectations. Once you have this list, you'll see that it's much easier to know what you need to test for, and it will even help you write your code afterwards, too. This is an easy one, but it illustrates the point:
# PostsController#show
# Inputs:
#   params[:id]
# Outputs:
#   @post <-- contains the Post which corresponds to the params[:id] input parameter
#   OR
#   throws ActiveRecord::RecordNotFound if Post w/id == params[:id] does not exist
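A list like this translates almost mechanically into tests. Here's a minimal, framework-free sketch of the same idea: a plain-Ruby stand-in with an in-memory store, since the real thing would be a Rails functional test. The method, the store, and the `RecordNotFound` stand-in are all illustrative assumptions, not real Rails code.

```ruby
# Plain-Ruby stand-in for PostsController#show, mirroring the
# inputs/outputs listed above (illustrative only -- not Rails).
class RecordNotFound < StandardError; end

POSTS = { 1 => "First post", 2 => "Second post" }

# Input:  params[:id]
# Output: the post matching params[:id],
#         or RecordNotFound if no such post exists
def show(params)
  POSTS.fetch(params[:id]) do
    raise RecordNotFound, "Couldn't find Post with id=#{params[:id]}"
  end
end
```

Each line of the input/output list becomes exactly one assertion: a known id returns its post, and an unknown id raises.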

2. Make it an exercise and practice, practice, practice

Take 2 hours at home, in your spare time, and give yourself too much to do. Outline more features than you can realistically implement in that timeframe, and go for it. Racing the clock helps, because that's what you'll be facing on a real project. It sounds cheesy, but it has really worked well for me.

3. Use autotest

(the direct link is here, but it seems to be down right now)

The easier, and more comfortable testing is, the more likely you are to do it. Autotest watches all of your files, and when one changes, it runs the appropriate tests. All you have to do is save the relevant file, and look over at your terminal window to see the results. No more hitting refresh in your browser, or even running tests manually.
Marc also told me about CruiseControl.rb. I haven't had a chance to play with it yet, but it looks very cool. The idea is that if somebody checks something into source control that breaks any tests, they're alerted immediately. Anything that makes testing easier is probably better.

What methodology-improvement tips do you have?

When to test

Aug 10, 2007


A lot of people seem to struggle with knowing when to test their code. The simple answer is always, and everything.

Always test every line of code that you write.

Know that your code works (and interacts) as expected

First and foremost, writing tests provides you with a safety net. Making changes and refactoring code becomes a calculated maneuver, instead of a guessing game. Stop asking yourself: Did I break something? And know that if the tests didn't break, you're safe.

When you first write a block of code, manual testing might be sufficient to ensure that it works correctly, because its purpose is fresh in your mind. But what about all the code it interacts with? Can you possibly remember to test for all of the things you might have affected? Can the next user of your code remember to test for all of the things they might have affected? Of course not, and that's exactly why it's up to you to build those tests for them.

Living proof

Take a look at this snippet that got checked into svn today:
belongs_to :user, :company
Had the author of this code written tests for it, or even run the generated test suite for his model, he might have realized that he'd made a typo: the belongs_to method doesn't accept multiple models as its first argument, and the correct code would have looked like this:
belongs_to :user
belongs_to :company
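For the curious, here's why even a trivial "does the model class load?" test would have caught this. The sketch below is a toy version of belongs_to's argument handling, not Rails' actual implementation, and the error message is invented: the second positional argument is an options Hash, so passing a second Symbol fails immediately.

```ruby
# Toy sketch of belongs_to's signature: one association name,
# then an options Hash (NOT a second association name).
def belongs_to(name, options = {})
  unless options.is_a?(Hash)
    raise ArgumentError, "Unknown options: #{options.inspect}"
  end
  name # real Rails would wire up the association here
end

belongs_to :user            # fine
# belongs_to :user, :company  # raises ArgumentError in this sketch
```

Any test that so much as loads the model class exercises this code path, which is all it takes to surface the typo.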
Test first, test always.

Super DRY Resources

Aug 09, 2007
Once again, I'll be responding to one of Marc's great posts. Today, he's talking about keeping your controllers DRY, using before_filters. For even less repetition in your controllers, the make_resourceful plugin really takes the cake. You can replace this:
class RecipesController < ApplicationController
  # GET /recipes
  # GET /recipes.xml
  def index
    @recipes = Recipe.find(:all)

    respond_to do |format|
      format.html # index.rhtml
      format.xml  { render :xml => @recipes.to_xml }
    end
  end

  # GET /recipes/1
  # GET /recipes/1.xml
  def show
    @recipe = Recipe.find(params[:id])

    respond_to do |format|
      format.html # show.rhtml
      format.xml  { render :xml => @recipe.to_xml }
    end
  end

  # GET /recipes/new
  def new
    @recipe =
  end

  # GET /recipes/1;edit
  def edit
    @recipe = Recipe.find(params[:id])
  end

  # POST /recipes
  # POST /recipes.xml
  def create
    @recipe =[:recipe])

    respond_to do |format|
        flash[:notice] = 'Recipe was successfully created.'
        format.html { redirect_to recipe_url(@recipe) }
        format.xml  { head :created, :location => recipe_url(@recipe) }
      else
        format.html { render :action => "new" }
        format.xml  { render :xml => @recipe.errors.to_xml }
      end
    end
  end

  # PUT /recipes/1
  # PUT /recipes/1.xml
  def update
    @recipe = Recipe.find(params[:id])

    respond_to do |format|
      if @recipe.update_attributes(params[:recipe])
        flash[:notice] = 'Recipe was successfully updated.'
        format.html { redirect_to recipe_url(@recipe) }
        format.xml  { head :ok }
      else
        format.html { render :action => "edit" }
        format.xml  { render :xml => @recipe.errors.to_xml }
      end
    end
  end

  # DELETE /recipes/1
  # DELETE /recipes/1.xml
  def destroy
    @recipe = Recipe.find(params[:id])
    @recipe.destroy

    respond_to do |format|
      format.html { redirect_to recipes_url }
      format.xml  { head :ok }
    end
  end
end
...with this:
class RecipesController < ApplicationController
  make_resourceful do
    actions :show, :index, :create, :edit, :update, :destroy
  end
end
That's just the scaffolding. What if I want to start using a permalink for my model, to satisfy the SEO department?
def current_object
  # one possible permalink-based lookup (illustrative):
  @current_object ||= current_model.find_by_permalink(current_param)
end
...but now my param is called id, which isn't really accurate. Can I change it?
def current_param
  params[:permalink]
end
...what about paging? (using the paginating_find plugin)
def current_objects
  current_model.find(:all, :order => "created_at DESC",
    :page => { :current => params[:page], :size => 10 })
end
...what about all of my fancy respond_to blocks, and RJS tricks?
response_for :show do |format|
  format.html
  format.js # RJS tricks go here
end
...WOW. Where can I get it? Here.

Learn a little bit more here (pdf).