About

James Golick

James Golick is an engineer, entrepreneur, speaker, and above all else, a grinder.

As CTO (or something?) of BitLove, he scaled FetLife.com's traffic by more than an order of magnitude (and counting).

James spends most of his time writing Ruby and Scala, building infrastructure, and extinguishing fires.

He speaks regularly at conferences and blogs periodically, but James values shipping code over just about anything else.

The Crunch Mode Paradox: Turning Superstars Average

Feb 16 2008

We've all been there — a product near (or not so near) ready to launch. Bosses breathing down your neck. Deadline fast approaching. People start working evenings, then weekends. It starts to feel, perhaps rightly, like nobody is doing anything other than working. Crunch mode is the time when everybody buckles down and focuses on the product, and only the product. Forget about everything else.

Crunch mode is the result of the belief that more hours spent coding means more software gets written. The obvious problem with management's formula is that a tired programmer can do more harm than good. A particularly nasty bug, for instance, can cost a team orders of magnitude more time to fix than it took to write. So, while it may be possible to increase the total number of lines of code written, the only reasonable measure of productivity is shippable lines of code. Nasty bugs written by tired, over-worked developers should be counted against total output, not for it. In crunch mode, managers tend to overlook the bugs and see productivity as increasing, when it's really decreasing. That's the crunch mode paradox.

Why Does it Happen?

You've assembled a team of all-star programmers. They're going to practice test-driven development, continuous refactoring, and other agile processes. They are code artists. Indeed, your team of developers is obsessed with 'beautiful code', and whether it's artistry or OCD, they're not happy until the code is perfect. They write shippable code.

The key to understanding why your team of carefully selected programmers breaks down in crunch mode is understanding what makes them tick in normal mode. Writing shippable code means being in a constant state of careful thought. Each change to the code is a calculated maneuver, the programmer's brain always working to spot new patterns of duplication as they emerge in the code base, and to promptly abstract them away. That process of continuously searching for a better way — for a better abstraction — is a major key to writing high quality code. As with anything else, abstraction can become a pitfall, but used properly, it means writing less code, easier testing, and perhaps most importantly, fewer places for bugs to hide. Of course, continuous refactoring and code reuse aren't the only pieces of the puzzle, but it's important to note the thought process of the developer who crafts his code this way.
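
To make that duplication-hunting concrete, here's a tiny sketch (the User struct and the averaging methods are invented for illustration): two methods that differ by one detail, and the single abstraction a careful developer would replace them with.

# Hypothetical example: a User with a couple of numeric attributes
User = Struct.new(:age, :karma)

# Before: two near-identical methods, a pattern of duplication
def average_age(users)
  total = 0
  users.each { |user| total += user.age }
  total / users.size
end

def average_karma(users)
  total = 0
  users.each { |user| total += user.karma }
  total / users.size
end

# After: the duplication abstracted away, one place for bugs to live
def average(users, attribute)
  users.inject(0) { |sum, user| sum + user.send(attribute) } / users.size
end

users = [User.new(20, 50), User.new(30, 150)]
puts average(users, :age)    # => 25
puts average(users, :karma)  # => 100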

In normal mode, your superstar developer cares about writing beautiful code; that's all. That's why he makes sure to write a comprehensive test suite, to continuously refactor, and to keep his code readable. That is probably the biggest difference between your great developer and the average developer you went to so much trouble to avoid hiring: your superstar's priority is the code, whereas the average programmer's priority is going home at the end of the day. When you flip that crunch mode switch, though, priorities change.

When you tell your team that there's a deadline for a certain feature set (especially when it's a tight one), the focus is no longer on the code. When everybody is racing to accomplish a set of tasks by a certain date, coming in to work every day becomes about pushing as many of those features out the door as quickly as possible. The pressure is on. People start cutting corners. Processes break down. People write fewer tests and refactor less frequently. You have effectively turned your superstar team into a group of average programmers.

In factory terms, a worker's production rate decreases over time. A worker who is creating 10 widgets/hour at the beginning of a shift may be producing only 6/hour at the end of the shift, having peaked at 12/hour a couple of hours in. Over time, the worker works more slowly, and makes more mistakes. This combination of slowdown and errors eventually reaches a point of zero productivity, where it takes a very long time to produce each widget, and every last one is somehow spoiled. Assembly-line managers figured out long ago that when this level of fatigue is reached, the stage is set for spectacular failure-events leading to large and costly losses – an expensive machine is damaged, inventory is destroyed, or a worker is seriously injured. (Why Crunch Mode Doesn't Work)

It Doesn't Have to Be This Way

There are two sides to the crunch mode problem: management and developers. Being a developer, I'm going to go after the pointy-haired among us first.

Managers

Crunch mode doesn't work. Sending your team on a death march quickly leads to programmer fatigue, which nearly always leads to bad code. Developers are likely to become demoralized, burn out, lose interest in the product, and, perhaps worst of all for you, grow resentful of management. Moreover, in many cases, thanks to nasty bugs, later hits to maintainability, and the rewrites that follow, the crunch mode developer is producing less in total. Finally, this sort of crunch mode is nearly always symptomatic of a team that has slipped into waterfall development.

The great thing about true agile development is that after the first couple of weeks, the code is always in a roughly shippable state. So, if you are working on a product that has to launch on a particular date, for whatever reason (be it PR or anything else), agile is a highly effective way to ensure that you'll have a shippable product ready. The reality is that shoving a specific feature set down the throats of your development team with a hard deadline simply does not work. The software will be alpha-quality if you're lucky.

John Nack, Adobe Photoshop product manager, says of the team's transition to agile:

The team's ability to deliver a public beta--a first for a major Adobe application*--is probably the single greatest tribute to the new method. In the old days, we could have delivered a feature-rich beta, but the quality would have been shaky at best. This time, as Russell says, "The public beta was basically just 'whatever build is ready on date X,'" because we knew it would be stable. (Agile development comes to Photoshop)

Developers

The key to beating crunch mode, in my experience, is forgetting about the pressure being put on you (not easy, I know); you have to work as normal. What happens to us during crunch mode is extremely counter-productive. For some reason, when we want to work faster, we stop doing all of the things that save us time. Testing, refactoring, and generally caring for the code are what separate ultra-productive developers from average ones. But best practices are the first thing to go when the pressure's on.

Once you've pulled yourself together, and started working normally again, the next step is to start saying no. Obviously, this piece of advice is going to be controversial, but I think it's a discussion worth continuing. As Raganwald says:

Try this: Employ an Engineer. Ask her to slap together a bridge. The answer will be no. You cannot badger her with talk of how since You Hold The Gold, You Make The Rules. You cannot cajole her with talk of how the department needs to make its numbers, and that means getting construction done by the end of the quarter. (What I admire about professions like Engineering and Medicine)

Obviously, nobody is going to die on account of (most) bad software, as they might with a poorly built bridge. But when you know that you and others on your team are doing bad work, it's time to speak up. If you don't, you're doing a disservice to yourself and your company. You're the one who is going to have to maintain this nightmare when crunch mode is over (and we know that there's never really time for a cleanup). Your company is going to have to deal with low quality software, and likely a botched release. Everybody loses.

The Crunch Mode Paradox

Crunch mode is a paradox. Managers think they're fostering increased productivity, but instead, they're damaging morale, the product, and often causing a net decrease in output. Developers try to increase their productivity by cutting all of the corners that make them effective in the first place. For those reasons, the death march has the opposite of its intended effect. Agile lifecycle management seems to be the most effective tool for guaranteeing that shippable software will be ready on a particular date. So, for companies trying to develop software, next time, try agile instead of crunch mode. Everybody will be happier.


Testing Misconceptions #1: Exploratory Programming

Oct 05 2007

So, my last essay on testing was ycombinatored, and then reddited the next day. Cool! I'm honored to have so many people reading and discussing my article. It's a pleasure.

I found some of the discussion very interesting. It seems that a lot of developers still don't believe in unit testing their code. In fact, many made arguments that questioned, or even outright dismissed, the value of unit testing (for more such comments, see the reddit and ycombinator threads, or the comments on the article itself). What surprised me most, though, was the number of misconceptions people have about what testing actually is, why we test, and how long it takes. Many, if not most, of the anti-testing arguments are based on entirely false premises.

In this on-going series, I'll put those misconceptions to the test (pun intended), and provide my take on what the truth is.

Testing Myth #1: I can't test first, because I don't have an overall picture of my program.

BTUF (Big Test Up Front) incurrs [sic] many of the same risks as BDUF (Big Design Up Front). It assumes you are creating artifacts now that will last and not change drastically in the future.

Yes, TDD implies that there is a more or less exact specification. Otherwise, if you're just experimenting, you would have to write the test and your code, and that's going to make you less inclined to throw it away and test out something else (see "Planning is highly overrated").

When I really have latitude in my goals, my code is just about impossible to pin down until it's 95% implemented.

How can you test something if you don't even know how or if it works? You need to hack on it and see if you can get things going before you nail it down, no?

According to this group, testing first is impossible because they're not sure exactly what they're writing. Some go so far as to equate testing first with big up front design. The assumption, in both cases, is that writing your tests first means writing all of your tests first, or at least enough of them to require a general overview of your program. Nothing could be further from the truth.

It seems likely to me that this group's misconception stems from mixing up unit testing with acceptance testing. Acceptance testing, whether automated or manual, would require an overall specification for how (at least some major portion of) the system should function. Nobody is suggesting that you write your acceptance tests first.

Unit tests verify components of your program in isolation. They should be as small as possible. And, in fact, if your unit tests know enough about your program that they're starting to look like acceptance tests, their effectiveness is going to be diminished considerably. That is, you don't want your unit tests to have an overall picture of what you're building. They should have as little of that picture as possible.
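
To make "small and isolated" concrete, here's a minimal sketch using Test::Unit (the Post class and its slug behaviour are invented, and defined inline so the example stands alone). The test exercises one behaviour of one object and knows nothing about the rest of the application:

require 'test/unit'

# A hypothetical Post, just enough to run the test below
class Post
  def initialize(title)
    @title = title
  end

  # Turn the title into a URL-friendly slug
  def slug
    @title.downcase.gsub(/\s+/, '-')
  end
end

class PostSlugTest < Test::Unit::TestCase
  # One component, one behaviour, zero knowledge of the wider system
  def test_slug_downcases_and_hyphenates_the_title
    assert_equal 'crunch-mode-paradox', Post.new('Crunch Mode Paradox').slug
  end
end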

Separating Concerns

Writing tests first doesn't mean you can't explore. It means that the exploration happens in your tests instead of your code, which is great! Your tests are exactly where the exploration belongs.

When you explore in your implementation code, you're trying to answer two questions at once: "What should my code do?" and "What's the best way to implement my code's functionality?". Instead of juggling both concerns at once, testing first divides your exploration into two stages. It creates a separation of concerns. You might even say that TDD is like MVC for your coding process.

You begin your exploration, of course, by putting together some preliminary tests for the first bit of functionality you're going to write. By considering the output before the implementation, you gain several advantages. The classic example here is that you get the experience of using your interface before you've invested any time in whatever bad API design ideas you may have had. But wait, there's more!

You also get the opportunity to focus on what your code will do. Before I began practicing TDD, I would regularly get almost all the way through writing a block of code before realizing that the idea just wasn't going to work. The thing is, when you're exploring, focused on one or two implementation lines at a time, the result of the code becomes an afterthought. By spending that minute or two up front thinking about what should come out of your code, you'll save yourself a ton of backtracking and rethinking later on.
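
Here's a small sketch of what that minute or two looks like in practice (the truncate helper is invented for illustration). The assertions get written while the method doesn't exist yet, so the expected output is pinned down before any implementation fiddling begins:

require 'test/unit'

class TruncateTest < Test::Unit::TestCase
  # Written first: these assertions decide the output before the code exists
  def test_truncate_shortens_long_strings_and_appends_an_ellipsis
    assert_equal 'hello w...', truncate('hello world', 10)
  end

  def test_truncate_leaves_short_strings_alone
    assert_equal 'hi', truncate('hi', 10)
  end
end

# Written second, to satisfy the expectations above
def truncate(string, length)
  return string if string.length <= length
  string[0, length - 3] + '...'
end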

That's all for today

I hope you enjoyed the first installment of Testing Misconceptions. I'd love to hear your feedback, or ideas for topics. Please feel free to leave them in the comments, or shoot me an email. Please check back for more episodes.


3 ways to improve your bullshit methodology

Aug 10 2007

Marc Cournoyer writes a great post detailing 5 ways to know whether your methodology is working (or whether it's bullshit), specifically re: TDD/XP.

I have been trying to improve my TDD practice for some time now. I am slowly getting better at writing tests first (and just writing tests, of course), but it does represent quite a significant shift in thinking. And, when you're used to writing the code first, as Marc says, that's where you're naturally going to go when the pressure is on. So, how do we stop this behavior? How do we get into the test-first zone?

Here are 3 things that have started working for me:

1. When you're stuck on what to test, make a list of possible inputs and expected outputs

One of the biggest challenges for me has been overcoming my tendency toward doing something like exploratory testing of my own code as I write it. This was the bad habit of not knowing what my code was going to do before I wrote it. I'd spend some time fiddling around with a few lines that I thought might accomplish what I wanted, and looking at the output, until it looked right (sound familiar?). With TDD, you have to start by thinking about what your code will output.

Take a second before you write any tests, and make a list of input parameters and output expectations. Once you have this list, you'll see that it's much easier to know what you need to test for, and it will even help you write your code afterwards, too. This is an easy one, but it illustrates the point:
# PostsController#show
#
# Inputs:
#   params[:id]
# Outputs:
#   @post <-- contains the Post which corresponds to the params[:id] input parameter
#   OR
#   throws ActiveRecord::RecordNotFound if Post w/id == params[:id] does not exist
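
From that list, the tests almost write themselves. Here's a sketch of the corresponding functional test, in the Rails style of the day (the fixture names, the test_helper path, and the controller setup are assumptions about your app):

require File.dirname(__FILE__) + '/../test_helper'

class PostsControllerTest < Test::Unit::TestCase
  fixtures :posts

  def setup
    @controller = PostsController.new
    @request    = ActionController::TestRequest.new
    @response   = ActionController::TestResponse.new
  end

  # Output: @post contains the Post corresponding to params[:id]
  def test_show_assigns_the_requested_post
    get :show, :id => posts(:first).id
    assert_equal posts(:first), assigns(:post)
  end

  # Output: ActiveRecord::RecordNotFound when no Post has that id
  def test_show_raises_record_not_found_for_a_missing_post
    assert_raises(ActiveRecord::RecordNotFound) { get :show, :id => 0 }
  end
end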

2. Make it an exercise and practice, practice, practice

Take 2 hours at home, in your spare time, and give yourself too much to do. Outline more features than you can realistically implement in that timeframe, and go for it. Racing the clock helps, because that's what you'll be facing on a real project. It sounds cheesy, but it has really worked well for me.

3. Use autotest

(the direct link is here, but it seems to be down right now)

The easier and more comfortable testing is, the more likely you are to do it. Autotest watches all of your files, and when one changes, it runs the appropriate tests. All you have to do is save the relevant file and glance at your terminal window to see the results. No more hitting refresh in your browser, or even running tests manually.
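
Autotest ships with the ZenTest gem and needs no configuration to start: just run autotest from your project root. If you want to tune it, you can drop a .autotest file in the project root. A small sketch, assuming ZenTest's hook API (worth checking against the docs for your version):

# .autotest, in the project root: optional per-project configuration
Autotest.add_hook :initialize do |autotest|
  # Don't re-run tests when files under these paths change
  %w(.git log tmp).each { |dir| autotest.add_exception(dir) }
end
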
Marc also told me about CruiseControl.rb. I haven't had a chance to play with it yet, but it looks very cool. The idea is that if somebody checks something in to source control that breaks any tests, they are alerted immediately. Anything that makes testing easier is probably better.

What methodology-improvement tips do you have?