How I Used Kata To Learn TypeScript Fast

I don’t know about you, but I’m not one to turn away early clues to the new direction from .NET A-listers whose lives have recently flashed before their eyes. So I sat up a bit when I came across the following sobering passage on Rocky Lhotka’s blog:

Sadly, as much as I truly love .NET and view it as the best software development platform mankind has yet invented, I strongly suspect that we’ll all end up programming in JavaScript–or some decent abstraction of it like TypeScript. As a result, I’m increasingly convinced that platforms like .NET, Java, and Objective C will be relegated to writing “toy” consumer apps, and/or to pure server-side legacy code alongside the old COBOL, RPG, and FORTRAN code that still runs an amazing number of companies in the world.

I blinked, reread the paragraph a few times, and decided (among other things) that it was high time that I dug in and learned this TypeScript of which Lhotka spoke.

Well. That, and the fact that I had just signed up to show it to Nashville’s .NET User Group within a matter of weeks.

Wax On, Wax Off

Now, I can read friendly, friendly Getting Started tutorials and watch Build talks all day long, but I find that I never really encounter all of the boogers and sharp corners on a new language until I’ve used it to solve a real problem or two. I had a good thing going in this respect for a while, by keeping a pet application. But that approach gets heavy once time has its way with you. I mean, I can’t recommend enough the experience of building something top to bottom on your own, at least once. But what can you do if the Internet is already full, and you just want to learn something?

Enter Kata. A practice borrowed from the martial arts, a kata is a discrete exercise designed to strengthen a skill through repetition. When applied in software, it is a means for nerds to write code in real-world conditions, without so much in the way of the commitment that typically accompanies code in the real world.

Naturally, I turned to kata to help me quickly wrap my head around TypeScript. The kata I chose for this occasion was the game of Fizz Buzz. Also, it’s me, so I came at it using Behavior-Driven Development. This helped me to lay out the game’s requirements, and to remain focused on meeting them as simply as possible.

Since TypeScript is really just JavaScript with a tie on, I was able to bring along my favorite JavaScript tooling–namely, the Jasmine testing framework, and the Chutzpah extension for integrating Jasmine with Visual Studio’s Test Explorer. And, of course, I had to install TypeScript itself, as it has not yet been mainstreamed into the out-of-box Visual Studio experience.

So, here’s how you can recreate my learning experience from the comfort of your own personal code laboratory.


First, do the File, New Project shuffle, and choose the HTML Application with TypeScript template. Add the jasmine.js and jasmine.TypeScript.DefinitelyTyped NuGet packages to your new project, and round out the yak-shaving portion of the session by creating two TypeScript files, one christened fizzbuzz.spec.ts for the tests, and the other fizzbuzz.ts for the implementation.

For bonus #PROTIP points: open both of these and lay them out side by side using two vertical tab groups, and bring up the Test Explorer pane at the right edge of the Visual Studio window.

All righty then, tests first. The basic idea behind the game of Fizz Buzz, according to the Wikipedia article, is as follows.

Fizz buzz (also known as bizz buzz, or simply buzz) is a group word game for children to teach them about division. Players take turns to count incrementally, replacing any number divisible by three with the word “fizz”, and any number divisible by five with the word “buzz”.

So, laying this out in discrete rules in order of increasing complexity, we get:

  1. The player says the next number.
  2. The player says ‘fizz’ if the next number is 3.
  3. The player says ‘buzz’ if the next number is 5.
  4. The player says ‘fizz’ if the next number is divisible by 3.
  5. The player says ‘buzz’ if the next number is divisible by 5.
  6. The player says ‘fizzbuzz’ if the next number is divisible by both 3 and 5.

Set up the spec file like so, adding a test for the first requirement using jasmine’s it() function, inside a describe function.

/// <reference path="scripts/typings/jasmine/jasmine.d.ts"/>
/// <reference path="fizzbuzz.ts"/>
 
describe("The fizz buzz player", () => {
 
    it("says the next number.", () => {
        var player = new FizzBuzzPlayer();
        expect(player.play(1)).toBe("1");
        expect(player.play(2)).toBe("2");
    });
});

Now save the file and run the tests. Since the FizzBuzzPlayer class does not exist yet, the TypeScript compiler will fail the build. Following the TDD cycle of red, green, refactor, this counts as our first red. Resolve this by creating the FizzBuzzPlayer class in fizzbuzz.ts:

class FizzBuzzPlayer {
    play(n: number) {
        return '';
    }
}

Now that the build succeeds, the test runner should show that the spec fails–the play method does not return “1”. Let’s solve this in the simplest possible way:

play(n: number) {
    return n.toString();
}

The test should now pass. Now let’s write a test for the second requirement:

it("says 'fizz' if the next number is 3", () => {
    var player = new FizzBuzzPlayer();
    expect(player.play(3)).toBe("fizz");
});

This test will fail with the message, “Expected ‘3’ to be ‘fizz’”. Update the play method again to make this pass:

play(n: number) {
    if (n == 3)
        return 'fizz';
 
    return n.toString();
}

Let’s do a bit of cleaning up for the refactor stage. It’s repetitive to initialize the player in each spec, so using jasmine’s beforeEach function, initialize the player in one place, and remove that line from each of the specs:

var player: FizzBuzzPlayer;
beforeEach(() => {
    player = new FizzBuzzPlayer();
});
 
it("says the next number.", () => {
    expect(player.play(1)).toBe("1");
    expect(player.play(2)).toBe("2");
});
 
it("says 'fizz' if the next number is 3", () => {
    expect(player.play(3)).toBe("fizz");
});

Beautiful. Let’s make things red again by adding a test for the next requirement:

it("says 'buzz' if the next number is 5", () => {
    expect(player.play(5)).toBe("buzz");
});

Red, it is: “Expected ‘5’ to be ‘buzz’”. Make it go green by updating our play method:

play(n: number) {
    if (n == 3)
        return 'fizz';
 
    if (n == 5)
        return 'buzz';
 
    return n.toString();
}

There’s no refactoring to be done yet, so let’s move back to red with the next requirement:

it("says 'fizz' if the next number is divisible by 3", () => {
    expect(player.play(6)).toBe("fizz");
    expect(player.play(9)).toBe("fizz");
    expect(player.play(27)).toBe("fizz");
});

Red, indeed: “Expected ‘6’ to be ‘fizz’”. Make this pass by checking for a remainder of 0, rather than equality:

if (n % 3 == 0)
    return 'fizz';

Repeat this for the buzz case.

it("says 'buzz' if the next number is divisible by 5", () => {
    expect(player.play(10)).toBe("buzz");
    expect(player.play(25)).toBe("buzz");
    expect(player.play(100)).toBe("buzz");
});

And to make it pass:

if (n % 5 == 0)
    return 'buzz';

Now define the test for our last requirement:

it("says 'fizzbuzz' if the next number is divisible by both 3 and 5", () => {
    expect(player.play(15)).toBe("fizzbuzz");
    expect(player.play(30)).toBe("fizzbuzz");
    expect(player.play(60)).toBe("fizzbuzz");
});

Once this fails, we can update our play method to solve the problem.

play(n: number) {
    var value = '';
 
    if (n % 3 == 0)
        value = 'fizz';
 
    if (n % 5 == 0)
        value += 'buzz';
 
    if (value === '')
        value = n.toString();
 
    return value;
}

Green! Now, there looks to be a bit of refactoring to be done here. I mean, if you’re into the whole code doing one thing well thing. Which you should be. The first two if clauses are identical, except for the values involved–that looks like a function to me:

private getWordForMultiple(n: number, factor: number, word: string) {
    if (n % factor == 0)
        return word;
    else
        return '';
}

Now we can swap out our repetitive if clauses for calls to this function:

play(n: number) {
    var value = '';
 
    value += this.getWordForMultiple(n, 3, 'fizz');
    value += this.getWordForMultiple(n, 5, 'buzz');
 
    if (value === '')
        value = n.toString();
 
    return value;
}

Run the tests now to be sure that they still pass.

I think we can still go a little tighter than that. Let’s make a getWords method that handles all of the words-for-multiples business in one call:

private getWords(n: number) {
    var value = '';
    value += this.getWordForMultiple(n, 3, 'fizz');
    value += this.getWordForMultiple(n, 5, 'buzz');
    return value;
}

Now our play function can be reduced to:

play(n: number) {
    var value = this.getWords(n);
    if (value === '')
        value = n.toString();
 
    return value;
}

And the tests still pass. Well, that was just a smashing learning exercise, and I thought that the whole thing went smoothly as a–

Your manager just walked out of a meeting. He looks a little ill as he approaches your desk. “We have a new requirement”, he says. “The Fizz Buzz Player needs to be configurable, to accommodate any combination of factors and words.”

Zoinks. It’s a good thing we’ve been keeping our code ship-shape and tested–let’s see what we can do. First, add the new requirement to the specs:

it("can be configured to accommodate any combination of factors and words.", () => {
    var cp = new FizzBuzzPlayer(2, 'jack', 7, 'squat');
    expect(cp.play(1)).toBe("1");
    expect(cp.play(2)).toBe("jack");
    expect(cp.play(3)).toBe("3");
    expect(cp.play(4)).toBe("jack");
    expect(cp.play(5)).toBe("5");
    expect(cp.play(6)).toBe("jack");
    expect(cp.play(7)).toBe("squat");
    expect(cp.play(8)).toBe("jack");
    expect(cp.play(9)).toBe("9");
    expect(cp.play(10)).toBe("jack");
    expect(cp.play(11)).toBe("11");
    expect(cp.play(12)).toBe("jack");
    expect(cp.play(13)).toBe("13");
    expect(cp.play(14)).toBe("jacksquat");
    expect(cp.play(15)).toBe("15");
});

We’re back to build failure, because FizzBuzzPlayer doesn’t define a constructor that accepts these arguments. Let’s get things building again by adding one:

constructor(
    public fizzFactor: number = 3,
    public fizzWord: string = 'fizz',
    public buzzFactor: number = 5,
    public buzzWord: string = 'buzz') { }

Note that not only are we taking advantage of TypeScript’s parameter-property shorthand (marking a constructor parameter public declares a matching property and assigns it automatically), but we’re also making these parameters optional by defining default values based on the original Fizz Buzz rules.
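If the shorthand looks like magic: as far as I can tell, the compiler simply expands each public parameter into a declared property plus an assignment in the constructor body. Spelled out longhand, our constructor is equivalent to this sketch (the FizzBuzzPlayerLonghand name is hypothetical, just for illustration):

```typescript
// Hypothetical longhand equivalent of the parameter-property constructor.
class FizzBuzzPlayerLonghand {
    fizzFactor: number;
    fizzWord: string;
    buzzFactor: number;
    buzzWord: string;

    constructor(fizzFactor: number = 3, fizzWord: string = 'fizz',
                buzzFactor: number = 5, buzzWord: string = 'buzz') {
        // These assignments are what the `public` shorthand generates for us.
        this.fizzFactor = fizzFactor;
        this.fizzWord = fizzWord;
        this.buzzFactor = buzzFactor;
        this.buzzWord = buzzWord;
    }
}
```

Four lines of parameter list versus this whole ceremony. Advantage: shorthand.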

Our test still runs red. Let’s fix up the getWords code to actually use the new properties we’ve added:

private getWords(n: number) {
    var value = '';
    value += this.getWordForMultiple(n, this.fizzFactor, this.fizzWord);
    value += this.getWordForMultiple(n, this.buzzFactor, this.buzzWord);
    return value;
}

And we’re back in the green! Time to refactor. Using the right-click menu’s Refactor, Rename option (high-five, TypeScript!), let’s rename FizzBuzzPlayer to something less specific, and more appropriate to what it actually does now: NumberGamePlayer. While you’re at it, update the name in the jasmine describe call, too.
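For the record, here’s roughly where the finished class lands after the rename. This is a sketch of my final state; your factoring may differ a little, and that’s fine as long as the specs stay green:

```typescript
// The fully refactored, configurable player (formerly FizzBuzzPlayer).
class NumberGamePlayer {
    constructor(
        public fizzFactor: number = 3,
        public fizzWord: string = 'fizz',
        public buzzFactor: number = 5,
        public buzzWord: string = 'buzz') { }

    play(n: number) {
        var value = this.getWords(n);
        if (value === '')
            value = n.toString();

        return value;
    }

    private getWords(n: number) {
        var value = '';
        value += this.getWordForMultiple(n, this.fizzFactor, this.fizzWord);
        value += this.getWordForMultiple(n, this.buzzFactor, this.buzzWord);
        return value;
    }

    private getWordForMultiple(n: number, factor: number, word: string) {
        if (n % factor == 0)
            return word;
        else
            return '';
    }
}
```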

You look up at your manager, who is still standing over your shoulder and clutching his business papers anxiously.

“No problem, boss!” you exclaim, and check in your changes without breaking eye contact.

“Whatever,” he mutters as he shakes his head and walks off, failing to grasp exactly how easy TypeScript, Jasmine, Chutzpah, and you have made his life just now.


The End of “Test-Driven” Development

In the not-too-distant future, we are going to finally stop hearing about Test-Driven Development. And by this, of course, I mean that the “Test-Driven” part of Development will just be assumed, because everybody will be writing code that way. In the meantime, you and your team can either be ahead of the curve or not.

Here’s why I’m so sure.

TDD takes less time.

You heard me. Less. But how could this be? Writing unit tests every time we write code basically means we have to produce twice as much code, right?

Yes, tests are additional code that must be written, usually in a one-to-one ratio with the production code. But to argue that twice the lines of code takes twice the time to produce is short-sighted–it only makes sense if you see your development team as the typing pool.

We do not type for a living. We define patterns in logic and data, and wire them together to build tools that people can use. And the fewer lines of code we use to do this, the better we are at our job. You don’t measure productivity by lines of code, do you? Then why would you estimate development work using the same broken thinking?

The measurable output of software development is functionality. Features. And features get done (and stay done) faster when we have confidence that the components involved are chewing gum and kicking butt. The test-first approach gives us this confidence.

It also, incidentally, yields less production code per feature. If you’re into that sort of thing. More on that in a second.

TDD keeps developers focused.

Do you want to talk productivity? Let’s talk productivity.

Every person who has ever gotten things done ever has had one essential tool in common: a to-do list. When we write tasks down, we are extracting them from the soggy, unreliable twerkfest in our head, and transforming them into an unflinching list of verbs. Only then can we stop treading water in an ocean of distractions and make a beeline for our next action.

If you have done TDD enough to warrant having an opinion on it at all, you know that writing a unit test before you’ve written the production code that makes it pass is exactly like writing something down on a to-do list and then doing it. Defining the next action in this way keeps us on task in the face of the deeply complex series of steps it often takes to implement a software feature.

Remember when I said that using TDD leads to less production code? This is due to the focusing effect that writing a test, and then making that test pass, has. We’re just oscillating between the red and the green–there’s no gold-plating step in that workflow. Defining a new test every few minutes gets us in the habit of frequently questioning whether what we’re coding is actually called for by the feature.

TDD more than pays for itself.

The time required for functionally comparable development tasks is not linear over the life of a project. A rigor mortis sets in as the codebase grows, because every decision made by every nerd on every line of code narrows the number of choices that the next nerd can make. But if our code is protected from regression and organized well (say, by being wrapped in a cozy, cozy blanket of unit tests woven during development), this harsh curve flattens out.

Initially, while we are still learning and forming the TDD habit, development time may increase on the order of ten to twenty percent. Do not panic. This is usually where the developer (or manager of the micro- variety) gets nervous and starts skipping the test-first thing, thinking that this will speed things along. We settle back into our old habits, entropy sets in, and any benefit that could have been had from TDD is lost.

But if we hang on tight through this dip and let the habit take hold, our estimates swing back to their normal size, and we notice that the only thing inflating them in the first place was a learning curve.

By keeping the code focused, protected, and consistent, TDD can drastically cut development, testing, and debugging time over the lifespan of an application. It is fragile and unmaintainable code, undetected bugs, and unforeseen design issues that cause a project to miss deadlines and blow budgets; not an up-front investment in quality and morale.

“TDD” will just become “D”.


When you get your head around the discipline of TDD, and the return on investment to be had from rocking it, it becomes difficult to see a future in which nerds and non-nerds alike don’t get behind it. Until then, teams of sad coders everywhere will keep chasing the short-term gains of Debug-Later Programming, emitting reams of sad, unsustainable code, and generally doing things that sad people do.

I mean, probably.

How Behavior-Driven Development Saved My Code from the Jedi Interruption

So, I’m not the sharpest sandwich in the drawer. This was demonstrated to me again in no uncertain terms just the other day, as I was working up some routing logic for a new Durandal-based app. The JavaScript in question went something like this.

router.guardRoute = function (instance, instruction) {
    var route = instruction.config;
    if (!isAuthenticated && !route.settings.anonymous)
        return getAuthenticationUrl();
 
    return true;
};

It’s not an original story. guardRoute runs every time the user tries to navigate to a view in the app. This snippet simply ensures that unauthenticated users get redirected to a login page if the new route is not available to anonymous visitors. And it hummed along happily for a short while.

When Suddenly

Apparently, if the optional settings property is not present on the route, an exception gets thrown, all of the air leaks out of our app, and the rotation of the Earth sort of grinds to a halt. Could be a bug.

No worries, thought I. We just needed to test route.settings before testing route.settings.anonymous.

if (!isAuthenticated && !route.settings && !route.settings.anonymous)

Hmm. That wasn’t quite right–what if we grouped those second two conditions together?

if (!isAuthenticated && !(route.settings && route.settings.anonymous))

That felt correct. But static analysis alone wasn’t enough to give me a warm fuzzy that my code was up to its arguably very important job.

Like I said, I’m not the brightest knife at the gun fight. I was standing on my head and counting on my fingers to ensure that this code was going to work. Maybe if this was the only thing I had going on at the time, perhaps if I was able to hone a laser-like beam of attention on the problem, I could have gotten to the point where I was confident in its correctness.

But, as it happened, at that moment my tête-à-tête with reinventing the authentication routing wheel took a back seat to making a peanut butter sandwich for this guy.

[photo: one small boy in Jedi cosplay, awaiting a sandwich]

To clarify, I was working from home that day. I mean, I wouldn’t want you thinking that Aptera keeps the kind of office where tiny hungry children just sort of wander around in Jedi cosplay and dislodge people from their respective trains of thought.

Anyway. By this point I was spreading peanut butter on bread and spiraling hard into a full-on Hanselman phony complex on account of this gear just not catching in my head. What was wrong with me?

Enter BDD

As is often the case, the solution was there before the problem. I just needed to take a few minutes to wire up some Jasmine specs.

Who?

Let me back up. By now, you probably have heard all about the wonders of Test Driven Development (TDD). How it will make your code flawless, fix your marriage, and scratch that itch in the middle of your back that you can never quite get to. But you may not be familiar with TDD’s more stylish younger brother, BDD. That’d be Behavior Driven Development, if you’re not into the whole brevity thing.

The thing about traditional TDD is that it has a pretty fundamental flaw–it is backward. Backward, at least, in terms of its underlying metaphor of testing a thing. See, in meatspace, in order to run a test on something (like in a laboratory), it first needs to exist. So when we’re told to write a test for code that doesn’t yet exist, we buck; how can we write a test when we don’t even know what our code looks like yet? If TDD is something you’ve been meaning to get into, but you could never quite find the front door on it, this is probably why.

The behavior-driven approach resolves this cognitive dissonance, not by being mechanically any different at all from TDD, but by flipping the assumption that what we are writing first is a test. What we write first, says BDD, is a spec–a specification that defines exactly what the code under test will do. With this, our brains crack into applause–we write code from specifications all the time.

In execution, though, a spec is just a unit test that’s named well. Here’s how a typical test suite (using QUnit here) might come off in the naming department.

module("PB and J tests");
 
test("bread test", function(){     
    // assert stuff about the bread
});
 
test("peanut butter test", function(){     
    // assert stuff about the peanut butter
});
 
test("jelly test", function(){     
    // assert stuff about the jelly
});
 
// etc.

Make a barf noise here, amirite? Here’s how we’d name things using the aforementioned Jasmine, a BDD framework for testing JavaScript code.

describe("A peanut butter sandwich", function() {
    
    it("has two slices of bread", function () {          
        // assert stuff about the bread     
    });         
    
    it("has peanut butter on one slice", function () {        
        // assert stuff about the peanut butter    
    });
        
    it("has jelly on one slice", function () {          
        // assert stuff about the jelly    
    });

    // etc.
});

What we expect to have happen in our code is just so much clearer when we spell it out in English¹. My very favorite thing about wording each spec like this is that Jasmine fuses the title from describe() with the phrase from each it() as subject and predicate. The output shown in the test runner then becomes a series of declarative statements about the code under test.

  • A peanut butter sandwich has two slices of bread.
  • A peanut butter sandwich has peanut butter on one slice.
  • A peanut butter sandwich has jelly on one slice.

And, of course, each of these comes with a red or green mark that shows us whether it is true yet.

The value of a good suite of BDD specs is that it is a direct English translation between our expectations for the code that we haven’t written yet, and the code that will test it to ensure that it meets the specification. The spec says what our code does. Our code does what the spec says. When we’ve made it all turn green, it works.

And here’s the part where I actually solved my problem

So cut back to the other day, post-peanut-butter-and-jelly-hiatus. I stopped banging my head directly against my troublesome code, opting instead to simply define, in plain English, what I wanted it to do.

describe("The auth service", function () {
});

A rousing start. Yessir, the auth service is in fact what I would like to be describing today.

describe("The auth service", function () {
 
    it("indicates when the user is authenticated", function () {
        var a = new auth(mocks.api);
        spyOn(mocks.api.login, 'GET').andReturn('jfazzaro@apterainc.com');
        a.start();
        expect(a.isAuthenticated()).toBe(true);
    });
 
    it("indicates when the user is not authenticated", function () {
        var a = new auth(mocks.api);
        spyOn(mocks.api.login, 'GET').andReturn(undefined);
        a.start();
        expect(a.isAuthenticated()).toBe(false);
    });
 
    var undefined; // shadow the global so 'undefined' in this scope really is undefined
    var mocks = {
        api: {
            login: {
                GET: function () { }
            }
        }
    };
});

In the first two specs above, I defined clearly what the isAuthenticated() method does, and since this was already implemented, they passed. But the real turning point was when it was time to spec out the service’s ability to determine whether a route was anonymous. I wrote the next two specifications as implementation-agnostic descriptions of what I was actually after.

it("indicates when a route is anonymous", function () {
});
 
it("indicates when a route is not anonymous", function () {
});

Once that was outside of my head and staring back at me in plain text, it occurred to me that that if block from the opening of this post–

if (!isAuthenticated && !(route.settings && route.settings.anonymous))

–yes, that one, was just trying too hard. The last two-thirds of it were an attempt to solve the anonymous route problem I had just finished specifying. And the fact that they weren’t isolated in their own named and testable function was what made it so difficult for me to quickly ascertain my code’s validity.

I descended on the keys with fresh purpose, spelling out the anonymous route checking functionality as it should look from the outside.

it("indicates when a route is anonymous", function () {
    var a = new auth(mocks.api);
    expect(a.isAnonymous(mocks.routes.signin)).toBe(true);
});
 
it("indicates when a route is not anonymous", function () {
    var a = new auth(mocks.api);
    expect(a.isAnonymous(mocks.routes.dashboard)).toBe(false);
    expect(a.isAnonymous(mocks.routes.simple)).toBe(false);
});
 
var mocks = {
    api: {
        login: {
            GET: function () { }
        }
    }, routes: {
        signin: { route: 'signin', moduleId: 'viewmodels/signin', settings: { anonymous: true } },
        dashboard: { route: '', moduleId: 'viewmodels/dashboard', settings: { anonymous: false } },
        simple: { route: 'simple/route', moduleId: 'viewmodels/simple/route' }
    }
};

Naturally, these tests went red in my test runner, because the isAnonymous(route) function didn’t exist yet. But it soon did.

function isAnonymous(route) {
    return route.settings && route.settings.anonymous;
}

Then, the tests went–red. Wait, still red? Yep.

[screenshot: the new isAnonymous specs still failing in the test runner]

I was close, but I may never have picked out the distinction between undefined and false without that spec in place to guide me. I updated the function, making it a bit more specific in the truthy/falsy department.

function isAnonymous(route) {
    return (route.settings != undefined && route.settings.anonymous);
}

Boom. Greensville.
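If the distinction is hazy, here’s a quick sketch of the failure mode, using a route shaped like the simple mock above (one with no settings property at all):

```typescript
// A route with no settings property, like the 'simple' mock route.
var simple: any = { route: 'simple/route', moduleId: 'viewmodels/simple/route' };

// First attempt: && short-circuits and hands back the falsy operand itself.
// Here simple.settings is undefined, so the whole expression is undefined.
var first = simple.settings && simple.settings.anonymous;

// toBe() does a strict comparison, and undefined === false is not true,
// so expect(first).toBe(false) stays red.

// Fixed version: the explicit != undefined check puts a real boolean on
// the left of &&, so the expression can only ever yield true or false.
var fixed = (simple.settings != undefined && simple.settings.anonymous);
```

With the fix, fixed comes back as an honest-to-goodness false, which is exactly what the spec demands.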

Finally, now that this irritating fruit fly of logic was labeled and tested, that code in guardRoute could now be read more organically, too.

if (!isAuthenticated && !isAnonymous(route))

When I read it aloud again in almost-English, I picked out a further subtlety: I really should test the route for non-anonymity first. I mean, if we’re hitting an anonymous route, our authentication status really doesn’t matter, and we shouldn’t waste any cycles checking on it.

if (!isAnonymous(route) && !isAuthenticated)

I like to imagine that Sam Clemens himself wouldn’t have put it more succinctly.

We think in English. The terse syntax of our favorite programming language becomes more compatible with the way we actually think when we use our words to specify and test our code. And offloading these micro-requirements from our heads into a self-testing suite of specifications doesn’t just make our software more readable and reliable, it also frees us up to focus on more important things.

Playing Jedi with the little guy, for instance.


  1. It’s a fair assumption that if you’re reading this in English, you’re thinking in English. If that’s not the case, by all means mentally drop your preference in whenever I mention it. Unless your preference is Klingon, in which case you must stop reading altogether and figure out where things went wrong for you.

Extension Cords, Tea Kettle Whistling, and the Contractor’s Wrap for Your Code

A few weekends ago, I found myself standing in my garage, regretting my ambitions of yard work and struggling to unwind the pile of orange spaghetti that was, at one point, a tidily wrapped fifty-foot extension cord.

Then I took a deep breath and reminded myself that things were going to be different this time.

See, the blog Art of Manliness had just run a post on how to wrap your extension cords like a contractor, using a technique that enables you to quickly uncoil whatever length of cord you need, without it ever becoming a tangled mess. The post was mostly pictures, so I managed to get through it. And things did, in fact, turn out differently this time.


Well. The orange garage-spaghetti department of things, at least.

Cool Story, Bro

So what, you may well ask, does my high-voltage gardening adventure have to do with building software?

Here’s the thing about code: it gets inscrutably balled up just like this all the time. And while I’m sure that you, dear reader, have never yourself been responsible for such a mess, I have borne witness as this recurring nightmare slowly unfolded on many an unsuspecting software venture.

In the beginning of the project, there’s the familiar big talk about how This Time, we’re going to keep things organized. This Time, we’re going to write REUSABLE code. This Time, we’re going to work harder and do it right. Things are going to be different This Time.


So, development hops along at a decent pace at first–like winding the first few coils in that orange extension cord. But soon, it gets harder and harder to actually reuse the REUSABLE components. Changes take longer to implement, you swear that you can hear the faint whistle of a tea kettle that is perpetually beginning to boil, and your source has seemingly contorted itself into shapes that would make a sailor cry. And when the new devs (that management has hurled at the problem) climb aboard and try to unravel a piece of the code for what they’re working on, those kinks just get twisted tighter. Soon, everyone involved sinks into their chair with the same question burning in their brain.

How did this happen to our code again?

We get it, Fazzaro. Code bad, team sad. But what exactly can a young person like myself do to invoke the beneficence of the programming gods that they may no longer smite us in this way?

Glad you asked. Let’s get specific.

In my travels, the most common cause of egregious code fluster in the modern business app is a poorly scoped Entity Framework DbContext¹. What might one of those look like? Stop me if you’ve heard this one: I’ve updated these entities, added two new ones over here, deleted that ugly one, and our REUSABLE code just called SaveChanges three times in the course of doing its business. So, I’m going to jiggle the handle and call it one more time over here for this case. You know, just to be sure.

Sure, the EF implements the Unit of Work and Repository patterns right out of the box. But this only helps if we’re using them correctly–which really comes down to understanding where the context belongs. Should a new context be created to persist each change to an entity? Should we create a new one for every method that alters data? Or should we just keep a global singleton context instance, and call SaveChanges on it when it feels right?

That’d be no, no, and hand over your keys, dude, respectively. Paths to the spaghetti side, those are. Instead, to keep the unit of work thing straight, I just keep two simple rules in mind:

  1. The unit of work does not go inside the reusable code, the reusable code goes inside of the unit of work.
  2. As soon as you create a new context or call SaveChanges, you are no longer writing the reusable code.

Some of your code will never be reusable, and that’s okay; that’s where the context goes. As for the rest of your code, here’s a way to make it truly and sustainably reusable.

I’m going to illustrate this by generally being abusive to a particularly troubling snippet of sample code from the Wingtip Toys tutorial on MSDN. In the application that the tutorial guides you through building, the ShoppingCartActions class houses the domain logic around adding, updating, and removing items from a shopping cart. I’ve abbreviated the class here to just show the salient bits:

public class ShoppingCartActions {
    public string ShoppingCartId { get; set; }

    private ProductContext _db = new ProductContext();
 
    public const string CartSessionKey = "CartId";

    public void AddToCart(int id) {
        // Retrieve the product from the database. 
        ShoppingCartId = GetCartId();

        var cartItem = _db.ShoppingCartItems.SingleOrDefault(
            c => c.CartId == ShoppingCartId
            && c.ProductId == id);
        if (cartItem == null) {
        // Create a new cart item if no cart item exists. 
            cartItem = new CartItem {
                ItemId = Guid.NewGuid().ToString(),
                ProductId = id,
                CartId = ShoppingCartId,
                Product = _db.Products.SingleOrDefault(
                    p => p.ProductID == id),
                Quantity = 1,
                DateCreated = DateTime.Now
            };

            _db.ShoppingCartItems.Add(cartItem);
        } else {
            // If the item does exist in the cart, 
            // then add one to the quantity. 
            cartItem.Quantity++;
        }
        _db.SaveChanges();
    }

    public void UpdateShoppingCartDatabase(String cartId, ShoppingCartUpdates[] CartItemUpdates) {
        using (var db = new ProductContext()) {
            int CartItemCount = CartItemUpdates.Count();
            List<CartItem> myCart = GetCartItems();
            foreach (var cartItem in myCart) {
                // Iterate through all rows within shopping cart list 
                for (int i = 0; i < CartItemCount; i++) {
                    if (cartItem.Product.ProductID == CartItemUpdates[i].ProductId) {
                        if (CartItemUpdates[i].PurchaseQuantity < 1 || CartItemUpdates[i].RemoveItem == true) {
                            RemoveItem(cartId, cartItem.ProductId);
                        } else {
                            UpdateItem(cartId, cartItem.ProductId, CartItemUpdates[i].PurchaseQuantity);
                        }
                    }
                }
            }
        }
    }

    public void RemoveItem(string removeCartID, int removeProductID) {
        using (var db = new ProductContext()) {
            var myItem = (from c in db.ShoppingCartItems where c.CartId == removeCartID && c.Product.ProductID == removeProductID select c).FirstOrDefault();
            if (myItem != null) {
                db.ShoppingCartItems.Remove(myItem);
                db.SaveChanges();
            }
        }
    }

    public void UpdateItem(string updateCartID, int updateProductID, int quantity) {
        using (var db = new ProductContext()) {
            var myItem = (from c in db.ShoppingCartItems where c.CartId == updateCartID && c.Product.ProductID == updateProductID select c).FirstOrDefault();
            if (myItem != null) {
                myItem.Quantity = quantity;
                db.SaveChanges();
            }
        }
    }
}

And here are a few lines from a nearby code-behind class that consumes ShoppingCartActions:

ShoppingCartActions usersShoppingCart = new ShoppingCartActions();
usersShoppingCart.AddToCart(Convert.ToInt16(rawId));

Oh, it looks innocent enough, all right. Until you notice that not only does each instance of ShoppingCartActions get its very own private ProductContext, but that when it’s time to call RemoveItem, UpdateItem, or (Robert Cecil Martin help you) UpdateShoppingCartDatabase, we’re whipping off new context instances like they are going out of style. Gross.

Why is that gross? Well, suppose that we had another page in our application that needed to perform similar actions on a shopping cart, in a slightly alternate combination:

ShoppingCartActions usersShoppingCart = new ShoppingCartActions();
usersShoppingCart.AddToCart(Convert.ToInt16(rawId1));
usersShoppingCart.AddToCart(Convert.ToInt16(rawId2));

var changes = GetShoppingCartChanges();
usersShoppingCart.UpdateShoppingCartDatabase(cartId, changes);

I count no fewer than four new ProductContext instances created here–and easily many more, depending on how many changes were in that list we passed in to UpdateShoppingCartDatabase. And if we needed this particular set of operations to be transactional, so that if one failed, no changes were sent to the database at all? Forget it. Yes, gross.

And yet, with a simple, subtle refactoring, we can turn this heinous double sheetbend into a nimble little slipknot. All we have to do is get it to let go of control of the context:

public class ShoppingCartActions {
    public string ShoppingCartId { get; set; }

    private ProductContext _db;

    public ShoppingCartActions(ProductContext db) {
        _db = db;
    }

    public void AddToCart(int id) {
        // Retrieve the product from the database. 
        ShoppingCartId = GetCartId();

        var cartItem = _db.ShoppingCartItems.SingleOrDefault(
            c => c.CartId == ShoppingCartId
            && c.ProductId == id);
        if (cartItem == null) {
            // Create a new cart item if no cart item exists. 
            cartItem = new CartItem {
                ItemId = Guid.NewGuid().ToString(),
                ProductId = id,
                CartId = ShoppingCartId,
                Product = _db.Products.SingleOrDefault(
                    p => p.ProductID == id),
                Quantity = 1,
                DateCreated = DateTime.Now
            };

            _db.ShoppingCartItems.Add(cartItem);
        } else {
            // If the item does exist in the cart, 
            // then add one to the quantity. 
            cartItem.Quantity++;
        }
    }

    public void UpdateShoppingCartDatabase(String cartId, ShoppingCartUpdates[] CartItemUpdates) {
        int CartItemCount = CartItemUpdates.Count();
        List<CartItem> myCart = GetCartItems();
        foreach (var cartItem in myCart) {
            // Iterate through all rows within shopping cart list 
            for (int i = 0; i < CartItemCount; i++) {
                if (cartItem.Product.ProductID == CartItemUpdates[i].ProductId) {
                    if (CartItemUpdates[i].PurchaseQuantity < 1 || CartItemUpdates[i].RemoveItem == true) {
                        RemoveItem(cartId, cartItem.ProductId);
                    } else {
                        UpdateItem(cartId, cartItem.ProductId, CartItemUpdates[i].PurchaseQuantity);
                    }
                }
            }
        }
    }

    public void RemoveItem(string removeCartID, int removeProductID) {
        var myItem = (from c in _db.ShoppingCartItems
            where c.CartId == removeCartID && c.Product.ProductID == removeProductID
            select c).FirstOrDefault();
        if (myItem != null)
            _db.ShoppingCartItems.Remove(myItem);
    }

    public void UpdateItem(string updateCartID, int updateProductID, int quantity) {
        var myItem = (from c in _db.ShoppingCartItems
            where c.CartId == updateCartID && c.Product.ProductID == updateProductID
            select c).FirstOrDefault();
        if (myItem != null)
            myItem.Quantity = quantity;
    }
}

You almost can’t see the difference–but note the complete lack of context instantiation, and that nowhere in there do we even consider calling SaveChanges. This is our rule number two in action; ShoppingCartActions just isn’t in the business of owning the ProductContext anymore.

Here’s how we can leverage this class in our code-behind now:

using (var db = new ProductContext()) {

    ShoppingCartActions usersShoppingCart = new ShoppingCartActions(db);
    usersShoppingCart.AddToCart(Convert.ToInt16(rawId1));
    usersShoppingCart.AddToCart(Convert.ToInt16(rawId2));
    var changes = GetShoppingCartChanges();
    usersShoppingCart.UpdateShoppingCartDatabase(cartId, changes);

    db.SaveChanges(); 
}

And that illustrates rule number one. We instantiate one ProductContext, we do our business with it, and we send a single set of changes to the database. Not only can the new ShoppingCartActions be applied to many different situations throughout the app, but it will scale like a mother to boot.

Dependency Injection is the Contractor’s Wrap for Code

By now, the sharper nerds in my readership will be calling out that I have simply invoked the Dependency Injection technique here, and they would be correct. We took a component that the logic was dependent upon and made sure that it came from somewhere else, outside of the logic’s implementation. Dependencies like context instances are really just input for the logic to use in its calculations. What’s really going to knock your socks off later on is that this pattern of threading dependencies through class constructors applies to any number of types that have nothing at all to do with the Entity Framework, or even data storage, for that matter. HttpContext, FileStream, SPWeb, RouteCollection, you name it. They are all just input for our code to operate upon.
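To make that concrete, here’s the same constructor-threading move applied to a dependency that has nothing whatsoever to do with EF. The AuditLogger class and the audit.log path are hypothetical, invented purely for illustration; the TextWriter plays the role the ProductContext played above:

```csharp
using System;
using System.IO;

public class AuditLogger {
    private readonly TextWriter _output;

    // The logger never news up (or closes) a stream of its own;
    // whoever owns the unit of work hands one in.
    public AuditLogger(TextWriter output) {
        _output = output;
    }

    public void Record(string action) {
        _output.WriteLine("{0:u} {1}", DateTime.UtcNow, action);
    }
}

// Meanwhile, in the non-reusable calling code that owns the resource:
using (var writer = new StreamWriter("audit.log", true)) {
    var logger = new AuditLogger(writer);
    logger.Record("AddToCart");
    logger.Record("Checkout");
}   // one open, one flush, one close–and AuditLogger stays reusable
```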

With just a minor tweak to the way we weave our classes together, our source can become tangle-free and flexible, and the next developer to lay hands on our app doesn’t have to worry about it curling up into an untenable heap of sad. It’s a shot in the arm for the sustainability and success of the app, and for the happiness of the team. And things are totally going to be different this time.

Well. The weeping-sailor heinous-code-fluster department of things, at least.


  1. Okay, so maybe you’re not using the Entity Framework (or its newer DbContext API), but don’t check out on me just yet–you might well be rocking a similarly steaming pile with some other ORM/unit-of-work-type implementation. Keep playing along at home, mentally adjusting my terminology to whatever busted situation you’re holding onto over there. I’ll make it worth your while.

How to Approximate Application_Start in SharePoint Like a Gentleman

Sometimes, you just need to start something.


In a typical ASP.NET application, if we need to start something when it cranks to life in IIS, we simply add some custom code to the trusty Application_Start handler in our Global.asax file:

public class OmgAwesomeApplication : System.Web.HttpApplication {
    protected void Application_Start() {
        // omg something awesome here
    }
}

In a SharePoint app, however[1], there is no Global.asax file, and indeed no such trusty method. This is by design, of course–running custom code inside SharePoint means relinquishing some control, especially over shared territory like events in the HttpApplication’s lifecycle.

Still. Sometimes, you just need to start something.

Good place to start, Init?

So how can we hook into the start of an application in a SharePoint context? Rather than waiting for a leg up from our Visual Studio project template, let’s use the old rusted side door in ASP.NET that takes us straight into the Application Start kitchen.

An HTTP Module is simply code that is configured to run in IIS in response to requests on the server. The IHttpModule interface asks for just two methods, Init and Dispose. On-label usage dictates that we use Init to subscribe to the given HttpApplication’s BeginRequest and EndRequest events:

public void Init(HttpApplication application) {
    application.BeginRequest +=
        (new EventHandler(this.Application_BeginRequest));
    application.EndRequest +=
        (new EventHandler(this.Application_EndRequest));
}

Now clear your mind, and forget about subscribing to these events. For our purposes here, we just want to run some code one time (and one time only) when the HttpApplication starts up. And it just so happens that our Init function runs right when the HttpApplication starts up. So:

public abstract class ApplicationStartHandler : IHttpModule {
    protected abstract void OnStart();
    public void Init(HttpApplication application) {
        OnStart();
    }
    public void Dispose() { }
}

Notice that I’ve built this out as an abstract class. This is not just because I am a gentleman. It’s because I want to encapsulate the pattern of running our yet-to-be-written custom code in the Init function. It’s also because I want to name the pattern in a way that makes it readable and understandable. Plus? I’m fancy.

If you don’t already, you’re really going to appreciate this fanciness/gentlemanliness in a few minutes.

Pulling our socks up and moving on, let’s inherit ApplicationStartHandler, and override the OnStart method to do something interesting (like perhaps boot up a ServiceStack host):

public class ServiceStackStarter : ApplicationStartHandler {
    protected override void OnStart() {
        var host = new MyAwesomeAppHost();
        host.Init();
    }
}

One last step before our fanciness is ready for prime time: we must let IIS know that our module exists. We do this by registering it in the Web Application’s web.config file, under configuration/system.webServer/modules.

<add name="ServiceStackStarter" type="Fazzaro.Blog.ServiceStackStarter, Fazzaro.Blog, Version=1.0.0.0, Culture=neutral, PublicKeyToken=544b79730baa4957" />

As you can see, this requires skill in the realm of obtaining the PublicKeyToken of your module’s assembly. Details on this yak-shavery can be found here.
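For the record, one low-ceremony route to the token: assuming the Windows SDK’s Strong Name tool is on your path (and swapping in your own assembly’s file name), it will cough the thing up directly:

```shell
# -T prints the public key token of a signed assembly
sn.exe -T Fazzaro.Blog.dll
```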

And once that’s in? Cut, print. Golly, was that ever a piece of 1337 SharePoint hax0r cake[2].

Um. Hey, why is the server room on fire?

Multithreading, Mutual Exclusion, and the Advantages of Having Been a Gentleman

Yeah. IIS is a web server. And when we inject code into its very guts like this, we have to account for the fact that multiple instances of said code are being run at any given time.

To pull off the fix for our original cavalier implementation, we must add a static flag and a mutex to the ApplicationStartHandler class, using them in combination to ensure that the OnStart method is run once (and only once):

public abstract class ApplicationStartHandler : IHttpModule {

    private static readonly object _mutex = new object();
    private static volatile bool _initialized = false;

    protected abstract void OnStart();

    public void Init(HttpApplication application) {
        // Check, lock, then check again, so that late arrivals
        // neither block forever nor re-run the startup code.
        if (!_initialized) {
            lock (_mutex) {
                if (!_initialized) {
                    Application_Start();
                }
            }
        }
    }

    private void Application_Start() {
        _initialized = true;
        OnStart();
    }

    public void Dispose() { }
}

Our OnStart method will now indeed run once and only once when the HttpApplication starts up, with no Global.asax in sight. And because we were gentlemen about the whole abstracting thing, our consuming code (in this case, the ServiceStackStarter class above) need not change in response to this fix.

This isn’t the first time I’ve noted here that standing on the shoulders of giants sometimes means leaving one of your favorite tools down on the ground. The trick is to stop being so shy about asking the giant to hand you a screwdriver.


  1. I’ve really got to stop beginning all of my sentences like this.
  2. Mmm. 1337 SharePoint hax0r cake.

Kent Beck’s Mother Doesn’t Work Here, and Neither Does Yours


Way back in 1997, when it was still just fine for a dude to rock a shiny bowling shirt and bleach the crest of his bangs, Kent Beck identified a three-step process for writing great code. Likely a truer and more useful thing to perhaps have tattooed on one’s self than ninety percent of the other ink permanently installed during the Clinton administration, it went a little something like this:

  1. Make it work
  2. Make it right
  3. Make it fast

Sakes, does that ever roll off the tongue and come off punchy in a meeting. But, just so we’re all running our reader finger along the same page with regard to what each of those amounts to:

  • Make it work means that the tests pass and the software functions as advertised.
  • Make it right means that the code has been refactored into something that you love so much that you wish there was a Clean Code Magazine and that they could come and interview you about it. You know. DRY, SOLID, readable and maintainable.
  • Make it fast means that the code is tuned for performance.

“Yeah, Jon, so, yeah. That’s fine for him and his pals in their Ivory Tower where they pair and write interfaces and invert dependencies and eat ice cream all day not that I’m angry. But our boss won’t let us do steps two and three. What does Kent Beck say about that?”

Well, I can’t speak for Beck. But I’d say we just found out why your code sucks.

Also: it is totally your fault.

Here’s why.

It’s easy to assume that these steps are supposed to happen at the lifecycle scope of a system, like phases in a project. As in:

  1. First, we spend six months getting the application to work.
  2. Then we go through the code with a slide rule, put some fresh tape on its glasses, and inflict a neckbeard wish-list that results in absolutely zero changes apparent to the user. If we’ve done it right.
  3. Finally, we go on stopwatch safari, hunting down for-each loops and parallelifying them. Also? Indexes.

And at long last the user has the very same feature (and bug) set that they’ve had for a year, except now all the nerds are happy and the budget is blown. Fowler be praised.

I suppose it’s possible to sell this approach to the patron of your sundry bracketed text files and persuade them to loosen their purse strings for months on end with nothing to show for it but QUALITY CODE. But it’s not likely. Nor should we expect anything of the sort—in fact, it’s bad business, and if you’re working for someone who is kosher with that schedule, they are a fool with whom said purse will soon be parted.

Don’t ask, don’t tell, and the hidden fourth step

In any case, from the non-nerd perspective, there is exactly one step to writing great code:

  1. Make it work

Make it right and Make it fast have nothing to do with anyone who isn’t slinging code. Heed another Clinton-era gem here: don’t ask, don’t tell. Trust me, you’re not Daniel Day-Lewis, and the users don’t care about your process. Steps two and three are assumed, rolled up into step one–of course it’s going to be right and fast.

And this leads us directly to how we can implement Beck’s method in the real world.

See, there’s a hidden fourth step that will help us remember where the first three belong. That fourth step? Check in the code.

That’s right. The three “Make it”s are not great hairy brushes with which to paint the entire codebase, one after the other. Rather, they represent an iterative approach to crafting each individual feature, before we can declare it done. So really, it’s:

  1. Make the feature work
  2. Make the feature right
  3. Make the feature fast
  4. Commit the code

Oh, and

  1. Tell the boss that you made the feature work

Our mother doesn’t work here, you guys. But if we make tidying up after our own sloppy click handlers a part of what we do every day, we can have our cake and refactor it too.

Mmm. Refactored cake.

Stop Being So Precious With Your Stash

If you have been anywhere and done anything in our line of work, you have a stash. Maybe it’s a .cs file or two that you drop into a project when you need them. Perhaps you’ve gone to the trouble of packaging it up in a DLL. But somewhere, you’ve got a stash of code that is so dadburned universally useful that you sprinkle it like a pinch of nerd salt onto every project you touch.

So. If it’s that universally useful, why aren’t you sharing it?

I Hope You Brought Enough Nerd For Everyone

“Nobody paid me for this code”, you say. “Why should I share it for free?”

I get it. I mean, that’s your code, after all, isn’t it?


Tit for tat. Quid pro quo. A day’s work for a day’s pay. Time is money. These adages of our parents are still totally relevant in the day to day machinations of modern business. But they don’t allow for progress–no one has ever moved the ball forward by sitting on it.

What I am getting at, of course, is open source. Yes, Open-flipping-Source, those words that make many devs and IT managers of a certain generation and platform preference slightly incontinent with uncertainty. Yet the nerds who do share their ‘ware on sites like GitHub and CodePlex raise a tide that lifts all of our boats.

I’m not asking for you to spend your witching hours bathed in an icy display glow, birthing software that blows the dorito crumbs out of every neckbeard on app.net. And I’m definitely not telling you to share your boss’s/client’s intellectual property–that’s wrong.

What I am asking you to do, though, is to dig up that file/library/plugin/thing that you are being so precious about, install GitHub for Windows, and share it with your fellow monos de código. Throw some spitshine on that useful little bauble, comment it up (or not) and hand it to the world. Yes, I know the world did not pay you for it. They never will–your day’s pay for that code is a sunk cost. But it’s going to help someone, somewhere, who has had the same universal problem to solve.

That’s Hippie Talk

All right. Let’s say you’re in it for you. You take the last of the coffee without making a fresh pot, and you sure as Stroustrup don’t care about giving your fellow developer a high-watered leg up. It’s on them to work it out for themselves, right? That’s why they pay you the big bucks, right? Pull yourself up by your own bootstraps, right?

Open sourcing your code can be pretty self-serving, too, if you’re into that sort of thing. Your project gets a permalink with your good name on it, and that becomes a part of your de facto résumé. So, even if you are only in it for number one, opening some of your code will pay dividends the next time you are looking for work, and someone you want to impress googles your good name.