Monday, April 13, 2009

Whatever it takes

(Well almost.)

Pawel Brodzinski over at Software Project Management wrote a blog post about why developers should work on crappy machines. To sum the article up: developers should use crappy machines to force good programming practices on them and, to some extent, good usability. I am sure most of the reasoning behind the post stems from the frustration users feel at having to deal with bloated, slow-performing software, or software that doesn't display correctly at lower-end resolutions; and with that, I wholeheartedly agree.

However, where Pawel and I differ is in how we should deal with that. There are two main complaints in the article:
  1. Bloated memory requirements of many applications
  2. Usability of applications at smaller screen resolutions

One of my most important missions as a project manager at Macadamian is to make sure that my developers can work as fast as possible. I don't mean hacking out code without regard for technical debt, but ensuring that their tools and processes don't slow them down. Yes, process is necessary, but the right amount of process can make the difference between highly motivated, self-organizing teams able to collaborate efficiently across distributed locations, and teams that are swimming in quicksand, unable to make a release to save their lives, or their companies.

Their tools need to be top notch: their development machines are dual- or quad-core, with 4 GB of RAM, big and fast hard drives, and dual monitors. And yes, some developers still complain about the setup :) We also invest in tools like IncrediBuild to build code faster, and in IT infrastructure that lets us copy large amounts of data across the network as fast as possible.

I don't want my very talented, and very expensive, developers wasting time waiting for projects to build, new versions to push to production, or code to check out. If a fresh build takes much more than 15 minutes, you have just lost thirty or more minutes to a game of foosball, a Starbucks run, or a YouTube foray.

So how do you deal with things like usability and performance requirements? Simple: engage in formal usability work, and create formal performance requirements that are monitored and enforced by your testing team.

A proper usability team can not only design UI that scales to multiple resolutions, but UI that is actually usable by the end user. They can apply actual scientific research to solve usability problems. And you get the added benefit of wireframes that your developers can rapidly build from, with less iteration and confusion. A picture is worth a thousand words!

This leaves only the performance requirements to be tackled. Developers can follow requirements (no, really, they can!), especially when monitored by your testing team. Memory and processor usage, install size, and overall performance characteristics can all be defined in the project requirements. The testing team can then apply tools and process to ensure these are met, and log bugs if they are not.
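To make this concrete, here is a minimal sketch of what such requirement-driven checks might look like. The metric names and thresholds are hypothetical, not from the article; the point is simply that non-functional requirements become data a testing team can check automatically and turn into bug reports.

```python
# Hypothetical non-functional requirements, expressed as upper limits.
# In a real project these would come from the project's requirements doc.
PERF_REQUIREMENTS = {
    "memory_mb": 256,    # max resident memory after login
    "install_mb": 150,   # max install footprint on disk
    "startup_sec": 5.0,  # max cold-start time
}


def evaluate(measurements, requirements=PERF_REQUIREMENTS):
    """Compare measured metrics against requirements.

    Returns a list of human-readable failure strings, one per violated
    requirement, suitable for logging as bugs. Metrics that were not
    measured are skipped.
    """
    failures = []
    for metric, limit in requirements.items():
        measured = measurements.get(metric)
        if measured is not None and measured > limit:
            failures.append(f"{metric}: measured {measured}, limit {limit}")
    return failures


# A login session that blows its memory budget gets flagged;
# a session within budget produces no bugs to log.
print(evaluate({"memory_mb": 320, "startup_sec": 3.2}))
print(evaluate({"memory_mb": 100, "startup_sec": 1.0}))
```

How the measurements are gathered (profilers, OS counters, install-size scripts) is tooling-specific; the value of the scheme is that pass/fail is decided by the written requirement, not by a tester's gut feeling.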

An excellent testing team takes responsibility for the overall quality of the application; they don't just run test cases. That means they will ask questions like "All I did was log into the application and it is using 123 megs of RAM. Why is it so high?" Your testing team does look for memory and other resource leaks as part of their testing, don't they?

2 comments:

Pawel Brodzinski said...

Jason,

I agree that there should be a usability team which makes sure an application works well on users' machines (whatever a "user machine" means). I agree there should be non-functional requirements which set how many resources an application can use on an end-user's machine.

Except it doesn't work that way.

I'm not ranting about some crappy app which none of my readers has heard of. I'm talking about applications used by half of the world. Is something completely wrong with their QA teams?

There's one more thing you treat a bit too idealistically: "Their tools need to be top notch."

I received several comments on the post from developers pointing out that the problem lies largely within the development tools they use. IDEs are pretty crappy in terms of resource usage (I'm not ranting about minimal screen resolution here, since developers usually work on high-res screens). And they generate a lot of crappy code in terms of resource usage. As a result we have pretty crappy applications, because as long as you don't want to rewrite all the libraries yourself, you leave resource management to your crappy IDE and crappy libraries.

And at the end of the day you get Firefox which eats 500 megs of RAM.

By the way, my solution was given with my tongue in my cheek - I mostly wanted to cause a stir, and it appears I was successful.

Jason Mawdsley said...

Pawel,

I don't know if there is anything wrong with their QA teams. Perhaps there weren't any requirements on resource usage.

I totally agree that the mentality at all levels is that hardware is cheaper than developers. It is cheaper, and quicker, to develop without regard for resource usage.

I know your solution was tongue in cheek, or at least I half expected it to be :) But there is often some truth to humour, and I have heard that school of thought before.

Regarding Firefox, I am sure there are some memory leaks; however, one of the biggest issues is that Firefox caches many of the pages you last visited to speed up the performance of the back and forward buttons.