Sunday, January 28, 2007

Coding on Self-Destruct

The title of this post might lead you to believe that I'm going to write about bad coding practices. Depending on your point of view, you might be right. If you're a product or project manager, that might be your position. If you're a developer, it might not. But I'm going to ramble for a bit about my personal approach to software development.

I basically have two philosophies when it comes to software development:

  • A software developer's job is not to write software. It is to serve the customer. The fact that he or she writes software to do it is entirely coincidental.
  • In order to do that job, a software developer must write code that is so well written and so well documented that he or she can be replaced at any time without impacting the project (or, by extension, the customer). 

You may think that the second one is a lofty goal. But a goal should be lofty; it's something to aim for. If it were easily attainable, anyone could do it.

Basically, it comes down to this: I'm obsessive about software quality. Not just coding, but the entire process: everything from the initial requirements gathering to the delivery of the product. I have to be; where I work, I'm a one-man software development shop. So the attitude that I have to take is that my job is to put myself out of work. I have to do work of such high quality that I am eventually no longer needed. That is what I mean by coding on self-destruct. My personal promise to the customer is to deliver high-quality software on time and within budget.

When I prepare a user guide, I know that real people have to use it. They have to be able to read it. I know how frustrated I get when I pick up some piece of software and it refers me to the user guide, and it's some poorly slapped-together rich text file or help file that isn't properly indexed or cross-referenced. So I take the time to make sure that the user guide is up to date, has a table of contents and an index, that it has full coverage of the software, lots of illustrations and how-to guides, and everything the users might need to know. I use plain English, not a bunch of mumbo jumbo that only tech geeks would enjoy. The users are not developers. The same thing holds true for the requirements specifications and test plans. You have to know your target audience and use language appropriate for them.

(Right now, however, there seems to be a big push in the industry to move away from what is scathingly referred to as BUFD (Big Up-front Design). I take issue with the idea that we shouldn't invest large amounts of time up front with the customer to determine their needs and wants. I think that we're treading on thin ice when we slap a system together and foist it upon them without sufficiently planning it out. I don't know that we need to invest up to a year in analysis and design, but we do need to spend enough time there to ensure that the system's scope doesn't grow out of control, and that everyone knows what the system is supposed to be and what it isn't. The costs associated with ripping out unwanted features and adding new ones that weren't identified due to a lack of adequate planning are enormous. Every time you do that, you introduce the risk of requiring full regression testing, system downtime for redeployment, reprinting of the manuals, and retraining of the users. It's expensive. Adequate planning can mitigate those costs.)

When I write the code, I write it for developers. But even then, I don't have any idea what the experience level of my successor will be. Will he or she have my level of experience? If not, they're going to have a tough time picking up my code and maintaining it unless I make it pretty darned easy to understand and maintain. That means self-documenting code, using a standard naming convention, a consistent coding model, and thorough use of comments.

If I get hit by a bus, the company can't afford to have the project come to a stand-still for six months while someone tries to learn what the heck I was doing. It's my responsibility to mitigate the amount of time it takes my successor to get up to speed. So I do that. And any responsible programmer should be doing that as well.

The payoff isn't just for my successor. It benefits me as well. I find that my own code is far easier to understand and maintain as I go back in to make defect corrections and add new features or remove those that have become obsolete. It takes me far less time to understand what a particular piece of code is doing and why if I've coded it consistently and commented it than if I haven't.

Everything that is required to build or deploy the software is stored in the source code repository: source code files, images, build files, batch files that prepare the build, SQL scripts, etc. I also store the release notes (Word documents), the user guide, test plan, requirements documents, and related documentation in it. It's a myth that you can't store these files in a source code repository. They're just binary files, and you'll get full version control on them.

I am always looking for ways to improve the visibility of the project. High project visibility gives the stakeholders a sense of involvement; they don't feel like they're sinking lots of money into a black hole and just hoping that something will come out in the end. To that end, constant communication with them is vital. They must always feel like their input is important; after all, it's their baby, and whatever I'm producing is for them, not me. They need to know that it's alive and kicking, that it's growing, a living breathing thing. Emails, phone calls, conference calls, a project web site, and on-site visits with demonstrations of the product go a long way towards keeping them abreast of its state. The customer's sense of involvement is an important part of the software's quality. They provide important feedback throughout the development of the product that will prevent me from making potentially costly mistakes in the design of the product that weren't caught in the initial analysis. Further, their review of the system might alert them to needs that they weren't aware of earlier--some of them critical, and some that can be slated for future releases.

We use a Web-based defect tracking system so that our customers can report defects as they find them. We categorize them and tackle critical defects first. Critical defects are those that result in a system crash or data corruption. After that we deal with high-priority defects: those that generate error messages. Next come medium-priority defects: features that don't generate error messages and don't behave the way they should, but have workarounds. Then come low-priority defects: cosmetic issues, such as font problems, spelling errors, and so forth. Feature requests are an entirely different category.
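To make the triage order concrete, here's a minimal sketch in JavaScript. The category names mirror the ones above, but the function and data shapes are hypothetical illustrations, not our actual tracking system:

```javascript
// Hypothetical triage helper: lower number means handled sooner.
// Categories mirror the scheme described above.
var PRIORITY = {
    critical: 1, // system crash or data corruption
    high: 2,     // generates error messages
    medium: 3,   // wrong behavior, no errors, workaround exists
    low: 4       // cosmetic: fonts, spelling, and so forth
};

function triage(defects) {
    // Sort a copy so the original list is left untouched.
    return defects.slice().sort(function (a, b) {
        return PRIORITY[a.category] - PRIORITY[b.category];
    });
}
```

Feature requests would live in a separate queue entirely, as noted above.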

Our goal is to have zero defects in the database at all times. If I find a defect that isn't in the database, I report it, and then I strive to address it in the current release on the development server. The smallest defects irritate me. Some folks don't understand why, but they do. When that software goes out the door, it's something that I made, the work of my hands. It bothers me when I know that it's going out there with a defect in it. And it's funny, because I know that there's no such thing as defect-free software.

Still, I am reminded of a passage from David Eddings' book, Pawn of Prophecy, in which the young Garion asks Durnik the smith why he bothered to fix a broken part on a wagon. It was in a location that no one would ever see. No one would know it was there, Garion had told him. "But I'll know," Durnik had replied.

And that's how I feel about defects. No one else will know these little defects are there. But I do. They may never crop up, but I'll know they're there. And it's my job to get rid of them. I will never be satisfied until every last one of them is gone. Because in my eyes, my promise to the customer, to deliver high-quality software on time and within budget, hasn't been met until those defects are gone.

A high quality application, whether it runs in a browser or on the desktop, needs to have an interface that is clean, consistent, visually appealing, and easy to use. It should provide lots of visual cues to the user about the task at hand. It shouldn't leave them guessing about what they're doing. If it's a data entry form, it should provide plenty of immediate data validation that helps them to enter good data rather than fights them in their efforts to do so. It should never make it easy for them to lose their work. ("You are about to discard your changes. Are you sure you want to do this?") It should use plain English or language from their particular domain to describe the tasks at hand, and not technical jargon.
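As a sketch of what I mean by immediate validation: a form field can check its value the moment the user finishes typing, and explain the problem in plain language. The field, limits, and messages here are hypothetical, purely for illustration:

```javascript
// Hypothetical check for a quantity field: validate right away,
// rather than fighting the user after the form is submitted.
function validateQuantity(value) {
    var n = parseInt(value, 10);
    if (isNaN(n) || String(n) !== String(value).trim()) {
        return "Please enter a whole number, such as 12.";
    }
    if (n < 1 || n > 999) {
        return "Quantity must be between 1 and 999.";
    }
    return ""; // an empty string means the value is acceptable
}
```

Wired into the field's onchange or onblur handler, the returned message can be shown beside the field immediately, instead of after a round trip to the server.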

An application should never assume that the user's vision is as good as the developer's or the designer's. Too many applications out there rely solely on color to distinguish between records, forgetting about colorblind users. Or they use very small fonts or window/page sizes, forgetting about users who run their screens at very low resolutions because they have poor vision. The result is an unwieldy application that forces Web users into lots of scrolling, or desktop users into obscure key commands to manipulate windows and move them around the screen.

I can't cover everything in one post. But I think you get the point. Software quality covers the full spectrum of the development process. There's room for improvement in the entire process. And improving it is an iterative process. You do it, then you review your process, and seek to improve it. With each iteration, you get a little better at it.

I don't write software for a living. I work to ensure that my customers can get their jobs done as quickly as possible, and with a minimal amount of hassle. It is purely a coincidence that tools of my trade happen to be a compiler and a computer. All of these processes that I have described are merely ways in which I strive to ensure that my customers are happy, so that I can eventually walk away from that project, knowing that they don't need me anymore, that the project will run just fine without me.

When that day gets here, then I'll know my job is done.

Thursday, January 25, 2007

Debugging JavaScript in Visual Studio.NET

I've often been frustrated by the difficulty of testing client-side script in my .NET Web applications. So, being the Google-savvy user that I am, I set out to find a solution and stumbled across Walt Ritscher's post, which I will shamelessly quote here, because I like to have this kind of information handy:

  1. Enable client-side script debugging in Internet Explorer
    1. Open Microsoft Internet Explorer.
    2. On the Tools menu, click Internet Options.
    3. On the Advanced tab, locate the Browsing section, clear the Disable script debugging check box, and then click OK.
    4. Close Internet Explorer.
  2. In your JavaScript function, add the debugger keyword. This causes VS.NET to switch to debug mode when it executes that line.
  3. Run your ASP.NET application in debug mode.

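For example, a function in a script file might look like this (the function name and logic are just placeholders I made up):

```javascript
// When script debugging is enabled in IE, VS.NET will break on
// the debugger statement; outside a debugger it is simply ignored,
// so the function still runs normally.
function calculateTotal(prices) {
    var total = 0;
    debugger; // execution pauses here under the debugger
    for (var i = 0; i < prices.length; i++) {
        total += prices[i];
    }
    return total;
}
```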
I've enthusiastically tested this little tidbit, and determined that it works exactly as he describes. Thanks, Walt! This tip is a life-saver.

However, I ran into an interesting problem: when I place the debugger keyword inside an .aspx file, Visual Studio .NET loads the wrong page into the debugger. Instead of stepping through script code, I'm stepping through HTML. It's quite peculiar. If I remove the debugger keyword from inline script (that is, script that occurs between <SCRIPT> and </SCRIPT> tags in the page itself) and put it inside an included script file, everything works fine.

Apparently, Visual Studio is having some difficulty with this sort of thing. The solution, of course, is to write a simple JavaScript include file that invokes the debugger, and include it whenever I want to invoke it. It's a simple (and yet inconvenient) workaround. It also complicates my build process, since it's one more file I have to make sure I remove from the shipped product.
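The workaround amounts to something like this: a tiny script file that does nothing but host the debugger statement. (The file and function names are my own choices for illustration; yours may differ.)

```javascript
// debug.js -- included only in development builds, and stripped
// from the shipped product by the build script.
// Call breakHere() from inline script wherever you want Visual
// Studio to stop; the debugger statement is a no-op when no
// debugger is attached.
function breakHere() {
    debugger;
}
```

In the page, reference it with a script tag (src="debug.js") and call breakHere() from the inline script you want to inspect.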

But I am able to debug script, and that's a godsend in and of itself. Just being able to step through JavaScript code and watch the variables in the Locals window is more than enough to make up for the hassles of an include file and an additional line in my build script.