Thursday, June 7, 2007

Finally, Someone Dares to Say it

From an article highlighted on Digg:

Intelligent extra-terrestrials almost certainly exist on distant planets beyond our solar system, leading British astronomers told the government yesterday. The scientists expect that the first evidence of primitive alien life, such as microbes and vegetation, will emerge within 10 years, with more substantial finds following future space missions.

I can't believe that anyone with a brain larger than a peanut believes that in a universe the size of ours, we're the only planet with life on it. Statistically speaking, the chances are astronomically remote that ours is the only planet that harbors life. (Yeah, that's a cruddy choice of words. Sue me later.) It would be the height of human arrogance to think that our world is the pinnacle of evolution throughout the known universe, and to place ourselves at the top of the evolutionary food chain.

But hey, what do you expect from a bunch of talking apes? (Especially a world where the biggest, loudest, most idiotic chimp of all was "elected" the ruler of the most powerful tribe of apes.)

Anyway, when you take into account the size of the universe, and the number of worlds that we're starting to find, the chances of finding one with primitive life rapidly increase. Every time we find a world with primitive life, the chances of finding one with more evolved life forms similarly increase. The further away from our world we move, the greater the chances of finding one with life forms similar to or more advanced than our own.

Does that mean they'll come zipping across the interstellar spaces in vast fleets of gleaming starcrafts to invade or embrace us? Hell no. They'll be just as constrained by the laws of physics as we are. But they're out there. Somewhere.

In my mind, the laws of statistics and probability are every bit as valid as the laws of physics.


Wednesday, June 6, 2007

Dew is to Water As Want is to Need

On today's Coding Horror entry, Jeff Atwood brings up a really interesting point about the power of observing users versus asking them. Paraphrased: what users actually need is typically not what they think they want or tell you they want.

It's funny and sad because it's true.

It took me a long time to understand the difference between want and need. I may want a nice, tall fizzy bottle of Mountain Dew, but my body needs water. It doesn't need Mountain Dew. Sure, Mountain Dew tastes better, and I like the fizz, and I look way more cool when I'm holding it, but I don't need it. I need water to hydrate my body and keep me alive. There's a big difference.

Similarly, when users tell you they want a piece of software that does X, Y, and Z, what they usually need is something that does A and B. (Usually, A and B are something on the order of "It works well" and "It doesn't corrupt my data.")

Nonetheless, trying to get users to tell you what they need is akin to extracting molars from a chicken. It's nearly impossible. They'll give you something like this:

  • It has to look really good. You know, like SILF. (Software I'd Like to @#$%)
  • It has to be fast. Really fast. Like, it has to be so fast that I get whiplash when it starts up.
  • It can't hoard memory. Cuz I'm using Windows 98. In fact, can you make it use no memory at all?
  • It has to be secure. Really secure. Like, Fort Knox secure. Oh, but I want to be able to pass it around on the Internet and share it with all my friends. Or on a USB drive. Or whatever. Ooh! BitTorrent!
  • It has to be a Web app too. With Flash. In fact, do it all in Flash. But I have to be able to use it on my cell phone. And on my XBox. And FireFox. FireFox totally pwnz Micro$oft.
  • Everything has to be done in my company's colors: Black and brown. I want all the text in this really cool dark brown color, and the background all has to be black! It's bitchin'! And flames everywhere! And I have this cool soundtrack I want to play throughout the whole thing! And every button should be a different color, and they should make machine gun sounds when you click them! And then explode with a giant fireball!
  • I've only got a budget of $500.

Oh, God. The horror. The humanity!

While this list is obviously heavily laden with hyperbole, it's not too far from the truth. A completely user-driven set of application specifications, expressing their wants, would likely provide nothing that they actually needed. At some point, you have to realize that users are, in fact, dreaming about toys.

In the end, software is about enabling users to get their work done faster and easier with a minimal amount of hassle. But users don't know that. Largely, they think it's about fun. Not all software is a first-person shooter or a massively multiplayer online role-playing game. (Sadly.) As members of a software team, it's our job to identify users' needs, and to create software that meets those needs without getting sidetracked.

If you're starting a new system from scratch, don't waste your time asking the users what they want. Ask them what they need. Explain the difference to them. (Start by making sure you understand it yourself.) Explain that every unneeded feature added to a product incurs additional cost and delays delivery, and that someone has to pay for it.

Build systems into your products to monitor which features are being used the most. Don't trust users to tell you. They aren't thinking about it. This kind of information is very useful for determining where to spend your time improving the product. It's also useful for identifying features that you think are critical (in business applications, especially) that should be used, but aren't being used. You can figure out why they aren't being used and target those areas for resolution.
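As a rough sketch of what such a subsystem might look like, here's a minimal in-memory feature counter in VB .NET. The class name, the call sites, and the Hashtable-backed storage are all made up for illustration; a production version would persist its counts to a database or log file rather than keep them in memory.

```vbnet
' A minimal, hypothetical sketch of a feature-usage tracker.
' Requires Imports System.Collections for the Hashtable.
Public NotInheritable Class FeatureTracker

    Private Shared ReadOnly counts As New Hashtable

    Private Sub New()
        ' Prevents instantiation
    End Sub

    ' Call this at the top of each feature's entry point.
    Public Shared Sub RecordUse(ByVal featureName As String)
        SyncLock counts.SyncRoot
            If counts.Contains(featureName) Then
                counts(featureName) = CInt(counts(featureName)) + 1
            Else
                counts(featureName) = 1
            End If
        End SyncLock
    End Sub

End Class
```

Sprinkle FeatureTracker.RecordUse("SomeFeature") into your command handlers, then periodically dump the counts to see which features are actually being used.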

If you write a solid subsystem like that for your software, you won't have to rely on user feedback, which isn't always reliable. A solid monitoring system will not lie to you. And that information will help you make much more intelligent decisions in the future. It's a question of silently observing the users as they actively use the product, instead of asking them about it after they've done so. All of us have a pretty short attention span when it comes to software use. I have no idea what the most used commands are in any given software package based on my own usage scenarios. For that reason, you couldn't ask me to tell you what commands or features I use and get a meaningful answer from me.

So think about whether or not you need it. If you do, invest the time to do it. But don't do it because you want it.

Dew? Or water?

Tuesday, April 24, 2007

To Flash, or Not to Flash

Considering the design of an all-Flash website? Struggling with the difficult choices involved? Risks got you stumped? Consult this handy chart to help you out.

It'll vastly simplify your decision-making process!

Tuesday, April 10, 2007

Yet Another Amateur Opinion

So, here I am, reading The Braidy Tester, where I frequently lurk, because, well, I like the way the guy thinks. Recently, he posted an article that I agreed with, so I finally mustered up the courage to kick down the closet door and post a response.

The basic premise of the article (which you can find here) was that we should occasionally entertain the notion of organizing project teams around features instead of skill sets. I agreed. In my response, I said:

Going against the grain is scary but sometimes vital in our industry. It takes courage, and always involves risk. But calculated risk isn't always a bad thing.

I'm not a fan of blindly adhering to "established best practices," "the Next Big Methodology (TM)," or established team models. Just because something is a best practice at Acme Corporation doesn't mean it's a best practice at Real World Inc. The business model is different, the staffing patterns are different, the problem domains are different, *everything* is different. You have to be willing, at some point, to accept the idea that a best practice for *them* isn't necessarily a best practice for *you.*

Be flexible. Do something different. Experiment. Find out what actually works, and makes you more effective. Make the leap. At the end of it all, if you find out that it didn't work, you'll at least come out of the experiment knowing something that you didn't know before. And the acquisition of knowledge is never a wasted effort.

(Note that that's not a vanity quote. It's done for clarity.)

The operative words in that post are "blindly" and "calculated." However, a subsequent poster referred to my opinion as "amateur."

Pecker-waving aside, how does one define an "amateur opinion"? I doubt very highly that the poster has any idea what my level of experience is. I also doubt that he read my post carefully; he apparently skipped right over the key words, blindly and calculated.

For whatever it's worth, I'm always willing to accept the fact that there are countless people out there who know far more than I do. They have scads more experience than I do. But the last time I checked, you didn't have to have a license or a degree to have an opinion.

So, despite the fact that I apparently lack the credentials to post an opinion, I'll simply refer the reader to the 1st Amendment. And then I'll ask the reader to read what I said, and then think about what I said, and then ask about my experience before labeling me an amateur.

I assure you, sir, I am not. I am familiar with the phenomena that have led me to this opinion.

I have worked for several companies where a methodology or practice was adopted simply because it was popular at the time; no thought was given to whether or not it was suitable for the environment. This was a failure on the part of management: they should have conducted a proper risk assessment beforehand to determine its feasibility and suitability. An estimate of the ROI would have been handy. But instead, utter chaos ensued, schedules slipped, projects failed, and employees left because there was a sense that no one knew what the hell was going on and no one was in control.

Exceptions to the rule? Undoubtedly. But realistic examples nonetheless. It happens. But it shouldn't.

Many of the mainstream, established methodologies and best practices are popular and widely implemented because they do, in fact, work. But they don't always work everywhere. I refer you to the old adage, "There is no silver bullet." You have to find the one that works for you; you don't just pick one out of thin air (at least, I hope not): you do your homework, and find one that is suitable for your business model, and maximizes the return on your investment. Maximum return for the least amount of risk.

Case in point: at one firm where I've worked, post mortems were ruled out as a bad political move. The development team was so small that the only one who could provide meaningful input was the customer, and management didn't want the customer critiquing the software development process. Yet a post mortem is a valuable tool, and an established best practice for improving your overall process when it's done right.

What about code reviews? If you've only got one developer in your company, they're not feasible because that developer can't be truly trusted to review his own code objectively. Who's going to do it? Are you going to outsource it?

Truly small companies can't always adopt methodologies and practices designed to support larger development teams. It doesn't make any sense. Companies with extremely short development cycles can't adopt the methodologies appropriate to those who have the time to implement BUFD, and so forth. You have to be selective. And sometimes, you have to develop a custom methodology that works specifically for your firm, that may entail bits and pieces borrowed from other established methodologies.

Trying to shoe-horn your company into a methodology that doesn't fit you will only give you calluses in the most uncomfortable places. I am certainly not advocating that every company in the world should do its own thing. I never once advocated that. What I did do was denounce blind adherence to TNBT and best practices that weren't well-suited to your business model. When the need arises to deviate from what everyone else is doing because what they're doing doesn't work for you, it makes sense to entertain the notion of doing something different. Even then, you should only engage a different way of doing things if the risk involved in doing so is manageable and acceptable.

But hey, if that kind of an opinion makes me an amateur, so be it. I can live with that.


Thursday, April 5, 2007

Refactoring Garbage Disposal

One of the most common tasks in my data layer is cleaning up my connections, transactions, and data readers. I do it a lot. The established code block for cleaning up a disposable object looks something like this: 

Public Sub ExecuteUpdate()

    Dim connection As SqlConnection
    Dim transaction As SqlTransaction
    Dim command As SqlCommand

    Try
        ' The connection string is assumed to be supplied elsewhere.
        connection = New SqlConnection()
        connection.Open()
        transaction = connection.BeginTransaction()

        command = connection.CreateCommand()
        command.CommandType = CommandType.StoredProcedure
        command.CommandText = "UpdateSomeData"
        command.Transaction = transaction
        command.ExecuteNonQuery()

        transaction.Commit()

    Catch ex As SqlException
        If Not transaction Is Nothing Then
            transaction.Rollback()
        End If
        Throw

    Finally
        If Not command Is Nothing Then
            command.Dispose()
        End If

        If Not transaction Is Nothing Then
            transaction.Dispose()
        End If

        If Not connection Is Nothing Then
            connection.Dispose()
        End If
    End Try

End Sub

If you work with a lot of disposable objects (and I'm guessing that most developers do), you get to do a lot of this kind of stuff. The checks for Nothing (null in C#) are mandatory--if you think that a command or transaction object won't ever be null/nothing, boy, are you in for a surprise. Just wait until you try to invoke Dispose on one of those objects and it's not there.

It didn't take long for me to realize that the Finally block in my data access layer was ripe for refactoring. (The drawback was that much of the initial code was generated by a tool. Sucky part, that. But newer code uses the refactored stuff I'm about to show you.)

I hate repeating myself. I do. I really, really do. So I looked at that code and decided that I needed a class that would help me to safely dispose of objects. I needed a garbage disposal--kind of like your in-sink erator, where you can simply toss vegetables, egg shells, ice cubes, or whatever suits you, and it safely whisks them down the drain.

There were numerous questions. Should it be a separate class, or a base class? A base class implementation is problematic. It means that anyone that wants access to the methods needs to derive from it; I could see lots of classes that would want to use it but I didn't want to insert the class into the inheritance chain.

(Global modules were ruled out right off the bat. Don't even suggest it. Global functions, like global variables, leave a really bad taste in my mouth. Let's not split hairs about what the compiler does behind my back. Just let it go, okay? Leave me to my AROCCF* ways.)

I settled on a separate class with Shared (C# static) methods. That way, you could access them on demand, as needed, from anywhere. After all, the methods maintain no class state; everything they need is passed in.

The resulting class provides four overloads of a single method: DisposeOf. Two of the overloads are type-safe versions designed to provide specific handling for the SqlDataReader and SqlConnection objects, ensuring that they're properly closed before being disposed of. One overload takes a paramarray of IDisposable objects, iterates over it, and invokes the last DisposeOf overload.

All of the DisposeOf overloads invoke the one basic implementation, which takes an IDisposable parameter. That method simply ensures that its argument isn't null, thereby avoiding a NullReferenceException. If the argument isn't null, it invokes IDisposable.Dispose on it.

The net effect is that the Finally block is reduced to this:

Finally
    Disposer.DisposeOf(command)
    Disposer.DisposeOf(transaction)
    Disposer.DisposeOf(connection)
End Try

Or, in the best case, to this:

Finally
    Disposer.DisposeOf(command, transaction, connection)
End Try

There is one caveat to using this class to dispose of data access objects: you should always dispose of your transaction and connection last, and always in that order. Then again, I don't think that's due to this class. I think that's just the proper order of disposal. The overloaded version that takes the paramarray disposes of the objects in the order that you pass them.

If you have other IDisposable objects that you frequently use that require special handling, you can easily extend this class to help you out. This class was built for VB .NET 1.1, since we don't have the using keyword. The class makes it easier to clean up objects that should be cleaned up.
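As a sketch of such an extension (purely hypothetical, and assuming an Imports System.IO at the top of the file), here's what an overload with special handling for a StreamWriter might look like, flushing buffered output before disposal:

```vbnet
' Hypothetical extension: safely disposes of a StreamWriter,
' flushing any buffered output first. Avoids a
' NullReferenceException.
Public Shared Sub DisposeOf(ByVal item As StreamWriter)
    If Not item Is Nothing Then
        item.Flush()
        item.Dispose()
    End If
End Sub
```

You'd also add a matching TypeOf check to the ParamArray overload so that passing a StreamWriter routes to this version.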

The code for this class follows below. You are free to take this code and use it or modify it however you wish. You aren't required to mention me, credit me, or even acknowledge that I exist. However, if it blackens your eye, bloodies your nose, or blows your foot off, remember that I don't exist. :)


Option Explicit On
Option Strict On

Imports System.Data
Imports System.Data.SqlClient

' Provides methods to assist in the safe disposal of objects.
Public NotInheritable Class Disposer

    Private Sub New()
        ' Prevents instantiation
    End Sub

    ' Safely disposes of an object. Avoids a
    ' NullReferenceException.
    Public Shared Sub DisposeOf(ByVal item As IDisposable)
        If Not item Is Nothing Then
            item.Dispose()
        End If
    End Sub

    ' Safely disposes of a SqlDataReader. Avoids a
    ' NullReferenceException. If the reader is open,
    ' it is first closed.
    Public Shared Sub DisposeOf(ByVal item As SqlDataReader)
        If Not item Is Nothing Then
            If Not item.IsClosed Then
                item.Close()
            End If
            item.Dispose()
        End If
    End Sub

    ' Safely disposes of a SqlConnection object. Avoids a
    ' NullReferenceException. If the connection is open,
    ' it is first closed.
    Public Shared Sub DisposeOf(ByVal connection As SqlConnection)
        If Not connection Is Nothing Then
            If Not connection.State = ConnectionState.Closed Then
                connection.Close()
            End If
            connection.Dispose()
        End If
    End Sub

    ' Safely disposes of an array of disposable objects.
    ' Objects are disposed of in the order they are passed.
    Public Shared Sub DisposeOf(ByVal ParamArray items() As IDisposable)
        For Each item As IDisposable In items
            If TypeOf item Is SqlConnection Then
                DisposeOf(DirectCast(item, SqlConnection))
            ElseIf TypeOf item Is SqlDataReader Then
                DisposeOf(DirectCast(item, SqlDataReader))
            Else
                DisposeOf(item)
            End If
        Next
    End Sub

End Class


* AROCCF = Anal Retentive Obsessive Compulsive Control Freak

Monday, April 2, 2007

Refactoring Your Way to Enlightenment

Take a moment to take stock of where you are now. What skills do you have? How can you improve them? Every day of your career, you should be learning something, improving something, refining something. Your skill set should be undergoing constant refactoring. This can only make you more efficient. If the stuff you're learning isn't making you more efficient, discard it.

At some point, you have to have the guts to go against the grain. Just because a "best practice" works for someone else at some other company doesn't necessarily make it a "best practice" for you and your company. A "proven methodology" isn't necessarily going to be a "proven methodology" for you. Have the guts to challenge the status quo. If it's not making you more efficient, it's likely hindering you. Refactor it out.

If your team doesn't have the funds to learn some new technique, seek that knowledge personally. There is no reason that your company's inability to fund team education should hold you back. Buy books and read. Search the Internet. Read blogs and programming newsgroups. Experiment with code. Ask your peers. Never stop seeking knowledge. Never stop learning.

Take some of your old code, copy it, and then refactor the hell out of it. You'll be surprised what you can learn by simply refactoring code: more efficient ways to implement things that you did before (and will likely do again), better algorithms that work faster, use fewer resources, and are easier to maintain. Refactoring improves your skill set. Refactoring your own code, on your own time, is a personal competition against yourself to improve your own skill set.

You don't need to compete against anyone else. Coding cowboys, platform fanboys, methodology purists, conspiracy theorists: you shouldn't be worrying about any of them. You should worry about yourself. Make yourself as good as you can possibly be. Every day, ask yourself this essential question: "How can I improve myself today?" Find that way, and then do it. Set aside a little time every day to refactor your skill set.

Each day is an opportunity to make yourself a little bit better, a little more efficient than you were the day before. With each passing day, you have the opportunity to become smarter, faster, wiser, more valuable. But that means taking care to constantly revise your skill set. Have the wherewithal to discard habits and ideas that simply don't work. If you suspect you're doing something one way simply because you've always done it that way, or because that's the way everyone else does it, question it. If you can't see a tangible benefit to it, refactor it out.

Look, I'm not Gandhi or anything. But I can tell you this: I firmly believe that the key to success in this field is a personal commitment to growth. Don't trust anyone to just hand you knowledge, or to stumble across the skills you'll need. You have to actively reach out and take the skills and knowledge you need to be successful. It's an active task. It's not going to be something you just acquire through osmosis.

We all have to get to a point where we realize that we're not as efficient, not as smart, not as skilled, and nowhere near as good as we could be. There's always someone out there who's better than we are.

Our goal isn't to compete with them. Our goal is to constantly aspire to be better than we are right now, at this very moment.

Thursday, March 29, 2007

What Have You Learned Over the Last Year?

Software development is a dynamic field. It's also a vast field. We deal with a plethora of technologies that baffle most folks; when we start talking shop around our nontechnical friends, they tend to look at us with blank faces, blinking eyes, and that curious bit of spittle leaking out the side of their mouths.

And yet, we persevere. We work grueling hours, under often impossible schedules, with vaguely defined specifications ("What? Specifications you say? Hah!"), and management that is frequently more concerned with ship dates than code or product quality. None of these things is surprising to anyone who's worked more than six months in software development.

If you've had the pleasure to work in a company that actually cares about product quality and developer sanity, count yourself one of the lucky few. And do everything in your power to hold onto that job.

But this post isn't about our job quality. It's about what we developers do while we're solving incredibly complex problems with less than optimal tools, less time than we need and fewer hands and minds than we'd like.

We think, we create, we innovate.

Give some serious thought to some of the really tough problems you've faced over the last year. Not the ones where you had the right tools to solve the problem, or where you had the knowledge you needed to do it. Think about the ones where you were completely out of your depth, where you had no idea what you were doing, or how the heck you were going to get out alive. You didn't have the knowledge. You lacked the tools. The clock was ticking. And you had a product to deliver.

And somehow, you survived and saved the day.

Experiences like this aren't new. They happen every day, in companies all around the world. Seemingly impossible demands are placed on developers, designers, DBAs, architects, testers, release engineers, technical writers, project managers, and everyone else involved in getting the product out the door. It's a monumental effort. In a lot of cases, development shops are severely understaffed, and the folks who work in them have to wear several hats--in the worst case scenario, one poor bastard gets the lucky job of wearing all the hats.  

And somehow, through all that mess, a product gets delivered. If it didn't, the company wouldn't be afloat. Sure, sometimes it doesn't happen. There are slippages, defects, embarrassments. But in the end, the product ships. Problems are solved. Work goes on. And the folks doing the work are learning. Evolving. Improving. Honing their craft.

If they're not, there's something seriously wrong.

At the end of every project, every major milestone, regardless of how well or poorly executed it was, how close to the predicted ship date it was, how riddled with defects it may have been, there's a chance to look back and reflect on what you learned from the experience. We learn from our mistakes and our successes. We learn what worked, what didn't, what can be improved, and what should be left alone.

This last year has been a whirlwind for me. I've learned a lot. For instance, a product that I thought was stellar when I first built it turned out to have lots of room for improvement. Sure, it was pretty, but it wasn't all that usable. And it was notoriously difficult to debug and maintain. And it was far too easy for users to enter invalid data. I'm pretty much coding in a vacuum here (and not by choice), so I didn't have the benefit of being able to ask my coworkers for advice. There were no Web designers to ask for input; no testers to rely on to catch my mistakes; no other developers to seek guidance from regarding better class models or unit testing methodologies.

But there were the users and their feedback. So I watched them, listened to them, and learned from them. I studied other user interfaces that I admired, and that users were passionate about. I studied other respected developers, designers and engineers. And I applied their philosophies to the subsequent releases.

The lessons I learned were simple, but breathtaking. Simplicity in design invariably leads to simplicity in testing. If it just works, it just works. Smaller, simpler interfaces are easier to test. Pages with thirty fields and all the validators to ensure they contain solid data are orders of magnitude harder to test. But smaller pages that contain fewer fields are faster, easier to use, easier to test, better encapsulated, and once you've determined that they're working, you're done with them. It radically changed my UI design philosophy. And I'm happy to report that the software is far better as a result of it. It just works.

Every day, someone somewhere learns something that changes their skill set for the better. When things are looking down, and they think they want to get the hell out of Dodge, it helps to remember all the ways in which we've improved.

The learning never stops. The improvement never stops. But how often do we stop and think about what we've learned? Do we ever give ourselves credit for what we've become? Taking a moment to reflect on where we've been is a crucial part of directing our future; we can't just keep stumbling around repeating the same mistakes. So take a moment, look back, and ask yourself this very simple question: What have I learned over the last year?

You might just be surprised.

Monday, March 12, 2007

Wrap Those Session Variables!

An interesting topic came up on the ASP.NET newsgroup forums today. The question arose: "Is there a standard naming convention for session variables?"

While the debate meandered around the various conventions, and their suitability, and whether or not one even applied to session variables, my chief thought was whether or not one should even see the names of the session variables.

Enter the basic premise of information hiding. The ASP.NET Session object is, essentially, a keyed collection. (Let's not bicker about its internal implementation. List, hash table, whatever. It's essentially a keyed collection, accessed by a string key.)

My point is that the individual keys used to access the items inside should not be replicated throughout your software. What if you choose to change the name of the key? What if the key's name isn't descriptive enough, and needs to be refined?

Sure, you could use a constant, but where do you put it? In a global constants file? That's rather icky. As anyone knows, a global constants file can quickly grow quite large, and navigating it can become a nightmare in and of itself. Then there's the problem of scoping. What if you have two constants with very similar names, and you want them both to have global scope? Now you get to mangle the name of one of them. Good luck choosing.

The ideal solution is to encapsulate the session variables in a class that manages them. This isn't as difficult as it seems, and it provides a host of benefits. The class is relatively straightforward, as shown below.

Option Explicit On
Option Strict On

Imports System.Web
Imports System.Web.SessionState

Friend NotInheritable Class MySession

    Private Const CompanySessionKey As String = "Company"

    Private Sub New()
        ' Prevents instantiation
    End Sub

    Private Shared ReadOnly Property Session() As HttpSessionState
        Get
            Return HttpContext.Current.Session
        End Get
    End Property

    Public Shared ReadOnly Property IsValid() As Boolean
        Get
            Return Not Session Is Nothing
        End Get
    End Property

    Public Shared Property Company() As String
        Get
            Return DirectCast(Session(CompanySessionKey), String)
        End Get
        Set(ByVal value As String)
            Session(CompanySessionKey) = value
        End Set
    End Property

End Class

A few things to note about this class:

  • The class has Friend scope. It can't be accessed outside of the Web application. That's just sound security.
  • The Private constructor prevents instantiation. Since all of the other members are marked Shared, it makes no sense to create an instance of this class.
  • The Session property returns an instance of the current session. This property is marked Private and Shared, and completely hides how we're getting the session information.
  • The IsValid property returns True if we have a valid session object. This helps us to avoid calling the various properties on the MySession class if there isn't a valid session to begin with. This might be the case in an exception handler.
  • The Company property is marked Public and Shared, and is responsible for getting a value out of the session, and putting it into the session. It uses a class-scope constant to reference the session variable, ensuring that both the getter and setter reference the same session variable. Further, the property is strongly typed. When you call this property, the session variable is already converted to the appropriate data type.

Creating this class provides a clean, straightforward way to reference session variables in your code. For example, consider the following (not uncommon) code sample:

lblCompany.Text = DirectCast(Session("company"), String)

Using the class described above, this code is reduced to the following:

lblCompany.Text = MySession.Company

Now, that may not look like much to you now, but imagine what it will save you when you have lots of session variables. And imagine how much easier it will be to refactor those variables should the need arise to do so. 

Finally, you can provide code comments in a centralized place that document what the session variables mean and are used for. That, in and of itself, is a huge boon to productivity. A little clarity goes a long way.

And just to show that I eat my own dog food, I use this very class in my own production software. And it does work, and it does save lots of time. I don't have to remember what that cryptic string is that retrieves a specific session variable. Instead, I type "MySession." and IntelliSense presents me with a list of available session variables.

You might be wondering where the exception handling code is. There isn't any, and that's by design. If a session variable is missing in a getter, it doesn't do me any good to catch that condition and rethrow it--I won't be telling the caller anything that the exception won't already be telling them. My exception handling code in the pages and the fallback exception handler in global.asax are responsible for cleanly handling those exceptions.

It also doesn't do me any good to handle the NullReferenceException that will be thrown if I try to reference the Session and it's not there. Again, the global exception handler will take care of it. I could, of course, wrap the exception in a custom ApplicationException; that option is always available. Then again, I could simply call MySession.IsValid before attempting to retrieve any properties, and avoid the exception altogether.

So there you have it. It's not a hard class to implement, but it pays off pretty well. For a little upfront effort, you get a decent return on your investment. Your code's readability and maintainability improve remarkably, and you know that you can refactor it with a high degree of safety. Further, you can document those session variables in the code, close to the statements that get and set them. And if you decide that you no longer want to use certain session variables, you can easily deprecate them by applying the Obsolete attribute to the properties to quickly identify every point in the code that's using them.
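For example, deprecating one of the session properties is essentially a one-line change (the property shown here is hypothetical, and its body follows the same get/set pattern as the rest of the class):

```vbnet
' A sketch: applying Obsolete to a deprecated session property makes
' the compiler flag every call site that still uses it.
<Obsolete("This session variable is deprecated; remove callers.")> _
Public Shared Property LegacyCompany() As String
    Get
        Return DirectCast(HttpContext.Current.Session("legacyCompany"), String)
    End Get
    Set(ByVal value As String)
        HttpContext.Current.Session("legacyCompany") = value
    End Set
End Property
```

After that, every remaining reference to MySession.LegacyCompany produces a compiler warning listing the offending file and line.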

So think about it. And if it's worth your time, implement it. I think you'll be glad you did.

Thursday, February 1, 2007

On Self-Control and Software Development

Just today, as I was working to deliver a major release on a product I'm working on, I found myself sidetracked by a little project of my own.

You see, there's this little problem with one of the data fields in the database. It's not major, just an annoyance, like a four-year-old poking you in the ribs for an hour, asking repeatedly, "Does this bug you?"

Well, it's been bugging me for ages. And I found myself today doing database queries and pasting data into Excel to have Excel build update queries for me using formulas (nice little time saver that is) so that I could include those statements in the SQL script to accompany the next major release.

And then it hit me: no one asked for this. It's not included in the test plan for this release. It's gold plating. I'm doing this because I want to, not because the customer asked me to.

Whoa, there, cowboy. Get a grip on yourself. Set that stuff aside, and focus on what you need to do, and not what you want to do. There are far more important deliverables to worry about, and you don't have time to waste on unauthorized features or fixes. Especially when those fixes are for issues that don't negatively impact the application. (It was a display issue--first name before last name.) It's just fluff.

On reflection, I find myself experiencing these kinds of monumental self-control issues all the time. I get really excited about the things I could do for the customer, and I really want to do them. But the truth is that just because I can do something for them doesn't mean that I should.

Any change that I make to the product has the potential to introduce new defects into the system. That's why every change I make must be tested. It's why there's so much testing involved in software. (And if there isn't, something's seriously wrong.) And the testing doesn't just occur here, at my desk. It happens at the client. The product undergoes rigorous user acceptance testing. And testing isn't cheap--it consumes precious man hours, which equates to someone's hourly wages. And if I haven't gotten it right, it has to be fixed and retested. All told, a single careless change can add up to massive sums in man hours of testing.

Let's not forget the impact that the change has on updating the test plan, the release notes, requirements documentation, and user guides. Plus any associated costs with reprinting and redistributing them.

And what happens if the customer decides that my unauthorized change needs to be taken out? What if its impact on the system is so drastically negative that it must be removed? Can it be easily rolled back? And if it must be removed, what are the costs associated with doing so, and republishing all the updated documentation and builds?

Are you getting my point yet? The cost of a simple change isn't just what it takes me to code and test it at my desk. That's just the tip of a massive iceberg.

It takes a lot of self-control to prevent myself from adding features to a system when those features aren't (1) requested by the customer, (2) included in the project plan, and (3) absolutely critical to the current release.

The problem, I think, is that a lot of developers out there, myself included, don't get sufficient mentoring in the discipline of self-control when it comes to software development.

For example, we're all hailing the virtues of refactoring code to improve its maintainability, and I agree that that's a good and useful thing. But how many developers know that just because you can refactor a piece of code doesn't mean that you should? How many developers are out there bogging down project schedules because they're busy refactoring code when they should be developing software that meets the requirements for the project deadline?

(And here, I will sheepishly raise my hand.)

It occurs to me that before I ever modify a piece of code, before I ever touch that keyboard and write any new class or method, or create any new window or Web page, I should be asking myself, "Is this in the project plan? Is it critical to the current release?" If it doesn't satisfy one of those questions, I shouldn't be doing it.

The key to getting that product out the door on time is staying focused, and not getting sidetracked by fluff. Take it from someone with experience: it's easy to get sidetracked by fluff. Adding cool features is easy to do, because you're excited about it, and motivated to do it. Working on the required deliverables is hard work; it requires discipline and self-control. You have to stay focused and keep your eyes on the target. (You thought I was going to say "ball," didn't you?)

But we, as human beings, don't want to do what we need to do, we want to do what interests us, and what excites us. It takes an act of sheer will to resist that urge, to restrain ourselves, and get the real work done. I would imagine that one of the things that separates a mature developer from a novice developer is quite likely his or her ability to resist that urge to introduce fluff into software.

In the end, I think it might be a good idea if programming courses included curricula on self-control as a discipline for developers. And I mean that quite seriously. We need to have it drilled into our heads that we shouldn't be adding anything to the product that only serves our own sense of what's cool or useful. That's not to say that developers can't sometimes predict useful features before the users do; but those features cannot and should not be introduced haphazardly into a product. They should be included as planned features in a scheduled release, so that they can be adequately tested and documented, and not just suddenly sprung upon someone as an Easter egg.

There's a time and a place for everything. Gung-ho initiative has its place; software development isn't it.

Sunday, January 28, 2007

Coding on Self-Destruct

The title of this post might lead you to believe that I'm going to write about bad coding practices. Depending on your point of view, you might be right. If you're a product or project manager, that might be your position. If you're a developer, it might not. But I'm going to ramble for a bit about my personal approach to software development.

I basically have two philosophies when it comes to software development:

  • A software developer's job is not to write software. It is to service the customer. The fact that he writes software to do it is entirely coincidental.
  • In order to do his job, a software developer must write code that is so well written and so well documented that he or she can be replaced at any time without impacting the project (or, by extension, the customer). 

You may think that the second one is a lofty goal. But a goal should be lofty; it's something to aim for. If it was easily attainable, anyone could do it.

Basically, it comes down to this: I'm obsessive about software quality. Not just coding, but the entire process: everything from the initial requirements gathering to the delivery of the product. I have to be; where I work, I'm a one-man software development shop. So the attitude I have to take is that my job is to put myself out of work. I have to do work of such high quality that I am eventually no longer needed. That is what I mean by coding on self-destruct. So my personal promise to the customer is to deliver high-quality software on time and within budget.

When I prepare a user guide, I know that real people have to use it. They have to be able to read it. I know how frustrated I get when I pick up some piece of software and it refers me to the user guide, and it's some poorly slapped-together rich text file or help file that isn't properly indexed or cross-referenced. So I take the time to make sure that the user guide is up to date, has a table of contents and an index, that it has full coverage of the software, lots of illustrations and how-to guides, and everything the users might need to know. I use plain English, not a bunch of mumbo jumbo that only tech geeks would enjoy. The users are not developers. The same thing holds true for the requirements specifications and test plans. You have to know your target audience and use language appropriate for them.

(Right now, however, there seems to be a big push in the industry to move away from what they are scathingly referring to as BUFD (Big Up-front Design). I take issue with the idea that we shouldn't invest large amounts of time up front with the customer to determine their needs and wants. I think that we're treading on thin ice when we slap a system together and foist it upon them without sufficiently planning it out; I don't know that we need to invest up to a year in analysis and design, but I think that we need to spend enough time in analysis and design to ensure that the system's scope doesn't grow out of control, and to ensure that everyone knows what the system is supposed to be and what it isn't. The costs associated with ripping out unwanted features and adding new ones that weren't identified due to inadequate planning are enormous. Every time you do that, you introduce the risk of requiring full regression testing, system down time for redeployment, reprinting of the manuals, and retraining of the users. It's expensive. Adequate planning can mitigate those costs.)

When I write the code, I write it for developers. But even then, I don't have any idea what the experience level of my successor will be. Will he or she have my level of experience? If not, they're going to have a tough time picking up my code and maintaining it unless I make it pretty darned easy to understand and maintain. That means self-documenting code, using a standard naming convention, a consistent coding model, and thorough use of comments.

If I get hit by a bus, the company can't afford to have the project come to a stand-still for six months while someone tries to learn what the heck I was doing. It's my responsibility to mitigate the amount of time it takes my successor to get up to speed. So I do that. And any responsible programmer should be doing that as well.

The payoff isn't just for my successor. It benefits me as well. I find that my own code is far easier to understand and maintain as I go back in to make defect corrections and add new features or remove those that have become obsolete. It takes me far less time to understand what a particular piece of code is doing and why if I've coded it consistently and commented it than if I haven't.

Everything that is required to build or deploy the software is stored in the source code repository: source code files, images, build files, batch files that prepare the build, SQL scripts, etc. I also store the release notes (Word documents), the user guide, test plan, requirements documents, and related documentation in it. It's a myth that you can't store these files in a source code repository. They're just binary files, and you'll get full version control on them.

I am always looking for ways to improve the visibility of the project. High project visibility gives the stakeholders a sense of involvement; they don't feel like they're sinking lots of money into a black hole and just hoping that something will come out in the end. To that end, constant communication with them is vital. They must always feel like their input is important; after all, it's their baby, and whatever I'm producing is for them, not me. They need to know that it's alive and kicking, that it's growing, a living breathing thing. Emails, phone calls, conference calls, a project web site, and on-site visits with demonstrations of the product go a long way towards keeping them abreast of its state. The customer's sense of involvement is an important part of the software's quality. They provide important feedback throughout the development of the product that will prevent me from making potentially costly mistakes in the design of the product that weren't caught in the initial analysis. Further, their review of the system might alert them to needs that they weren't aware of earlier--some of them critical, and some that can be slated for future releases.

We use a Web-based defect tracking system so that our customers can report defects as they find them. We categorize them and tackle critical defects first. Critical defects are those that result in a system crash or data corruption. After that we deal with high-priority defects: those that generate error messages. Next come medium-priority defects: features that don't generate error messages and don't behave the way they should, but have workarounds. Then come low-priority defects: cosmetic issues, such as font problems, spelling errors, and so forth. Feature requests are an entirely different category.

Our goal is to have zero defects in the database at all times. If I find a defect that isn't in the database, I report it, and then I strive to address it in the current release on the development server. Even the smallest defects irritate me, though some folks don't understand why. When that software goes out the door, it's something that I made, the work of my hands. It bothers me to know that it's going out there with a defect in it. And it's funny, because I know that there's no such thing as defect-free software.

Still, I am reminded of a passage from David Eddings' book, Pawn of Prophecy, in which the young Garion asks Durnik the smith why he bothered to fix a broken part on a wagon. It was in a location that no one would ever see. No one would know it was there, Garion had told him. "But I'll know," Durnik had replied.

And that's how I feel about defects. No one else will know these little defects are there. But I do. They may never crop up, but I'll know they're there. And it's my job to get rid of them. I will never be satisfied until every last one of them is gone. Because in my eyes, my promise to the customer, to deliver high-quality software on time and within budget, hasn't been met until those defects are gone.

A high quality application, whether it runs in a browser or on the desktop, needs to have an interface that is clean, consistent, visually appealing, and easy to use. It should provide lots of visual cues to the user about the task at hand. It shouldn't leave them guessing about what they're doing. If it's a data entry form, it should provide plenty of immediate data validation that helps them to enter good data rather than fights them in their efforts to do so. It should never make it easy for them to lose their work. ("You are about to discard your changes. Are you sure you want to do this?") It should use plain English or language from their particular domain to describe the tasks at hand, and not technical jargon.

An application should never assume that the user's vision is as good as the developer's or the designer's. Too many applications out there rely solely on color to distinguish between records, forgetting about colorblind users. Or they use very small fonts or window/page sizes, forgetting about users who run their screens at very low resolutions because they have poor vision. The result is an unwieldy application that forces Web users into constant scrolling, or desktop users into obscure key commands just to move windows around the screen.

I can't cover everything in one post. But I think you get the point. Software quality covers the full spectrum of the development process. There's room for improvement in the entire process. And improving it is an iterative process. You do it, then you review your process, and seek to improve it. With each iteration, you get a little better at it.

I don't write software for a living. I work to ensure that my customers can get their jobs done as quickly as possible, and with a minimal amount of hassle. It is purely a coincidence that tools of my trade happen to be a compiler and a computer. All of these processes that I have described are merely ways in which I strive to ensure that my customers are happy, so that I can eventually walk away from that project, knowing that they don't need me anymore, that the project will run just fine without me.

When that day gets here, then I'll know my job is done.

Thursday, January 25, 2007

Debugging JavaScript in Visual Studio.NET

I've often been frustrated by the difficulty of testing client-side script in my .NET Web applications. So, being the Google-savvy user that I am, I set out to find a solution and stumbled across Walt Ritscher's post, which I will shamelessly quote here, because I like to have this kind of information handy:

  1. Enable client-side script debugging in Internet Explorer
    1. Open Microsoft Internet Explorer.
    2. On the Tools menu, click Internet Options.
    3. On the Advanced tab, locate the Browsing section, and uncheck the Disable script debugging check box, and then click OK.
    4. Close Internet Explorer.
  2. In your JavaScript function, add the keyword debugger. This causes VS.NET to switch to debug mode when it runs that line.
  3. Run your ASP.NET application in debug mode.

I've enthusiastically tested this little tidbit, and determined that it works exactly as he describes. Thanks, Walt! This tip is a life-saver.

However, I ran into an interesting problem: when I place the debugger keyword inside an .aspx file, Visual Studio .NET loads the wrong page into the debugger. Instead of stepping through script code, I'm stepping through HTML. It's quite peculiar. If I remove the debugger keyword from inline-script (that is, script that occurs between <SCRIPT> and </SCRIPT> tags in the page itself) and put it inside an included script file, everything works fine.

Apparently, Visual Studio is having some difficulty with this sort of thing. The solution, of course, is to write a simple JavaScript include file that invokes the debugger, and include it whenever I want to invoke it. It's a simple (and yet inconvenient) workaround. It also complicates my build process, since it's one more file I have to make sure I remove from the shipped product.
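The include file itself amounts to a single line (the file name here is my own choice):

```javascript
// debug.js -- the debugger statement hands control to the attached
// script debugger; without a debugger attached, it is simply a no-op.
debugger;
```

Reference it from the page with `<script src="debug.js" type="text/javascript"></script>` wherever you want the break to occur, and strip that tag out of the shipped product during the build.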

But, I am able to debug script, and that's a godsend in and of itself. Just being able to step through JavaScript code, and watch the variables in the local window is more than enough to make up for the hassles of an include file and an additional line in my build script.