Thursday, June 7, 2007

Finally, Someone Dares to Say it

From an article highlighted on Digg:

Intelligent extra-terrestrials almost certainly exist on distant planets beyond our solar system, leading British astronomers told the government yesterday. The scientists expect that the first evidence of primitive alien life, such as microbes and vegetation, will emerge within 10 years, with more substantial finds following future space missions.

I can't believe that anyone with a brain larger than the size of a peanut believes that in a universe the size of ours, we're the only planet with life on it. Statistically speaking, the chances are astronomically remote that we're the only planet that harbors life. (Yeah, that's a cruddy choice of words. Sue me later.) It would be the height of human arrogance to think that our world is the pinnacle of evolution throughout the known universe, and to place ourselves at the top of the evolutionary food chain.

But hey, what do you expect from a bunch of talking apes? (Especially a world where the biggest, loudest, most idiotic chimp of all was "elected" the ruler of the most powerful tribe of apes.)

Anyway, when you take into account the size of the universe and the number of worlds that we're starting to find, the chances of finding one with primitive life begin to rapidly increase. Every time we find a world with primitive life, the chances of finding one with more evolved life forms similarly increase. The further away from our world we move, the greater the chances of finding one with life forms similar to or more advanced than our own.

Does that mean they'll come zipping across the interstellar spaces in vast fleets of gleaming starcrafts to invade or embrace us? Hell no. They'll be just as constrained by the laws of physics as we are. But they're out there. Somewhere.

In my mind, the laws of statistics and probability are every bit as valid as the laws of physics.


Wednesday, June 6, 2007

Dew is to Water As Want is to Need

In today's Coding Horror entry, Jeff Atwood brings up a really interesting point about the power of observing users versus asking them. Paraphrased: what users actually need is typically not what they think they want, or what they tell you they want.

It's funny and sad because it's true.

It took me a long time to understand the difference between want and need. I may want a nice, tall fizzy bottle of Mountain Dew, but my body needs water. It doesn't need Mountain Dew. Sure, Mountain Dew tastes better, and I like the fizz, and I look way more cool when I'm holding it, but I don't need it. I need water to hydrate my body and keep me alive. There's a big difference.

Similarly, when users tell you they want a piece of software that does X, Y, and Z, what they usually need is something that does A and B. (Usually, A and B are something on the order of "It works well" and "It doesn't corrupt my data.")

Nonetheless, trying to get users to tell you what they need is akin to extracting molars from a chicken. It's nearly impossible. They'll give you something like this:

  • It has to look really good. You know, like SILF. (Software I'd Like to @#$%)
  • It has to be fast. Really fast. Like, it has to be so fast that I get whiplash when it starts up.
  • It can't hoard memory. Cuz I'm using Windows 98. In fact, can you make it use no memory at all?
  • It has to be secure. Really secure. Like, Fort Knox secure. Oh, but I want to be able to pass it around on the Internet and share it with all my friends. Or on a USB drive. Or whatever. Ooh! BitTorrent!
  • It has to be a Web app, too. With Flash. In fact, do it all in Flash. But I have to be able to use it on my cell phone. And on my XBox. And FireFox. FireFox totally pwnz Micro$oft.
  • Everything has to be done in my company's colors: Black and brown. I want all the text in this really cool dark brown color, and the background all has to be black! It's bitchin'! And flames everywhere! And I have this cool soundtrack I want to play throughout the whole thing! And every button should be a different color, and they should make machine gun sounds when you click them! And then explode with a giant fireball!
  • I've only got a budget of $500.

Oh, God. The horror. The humanity!

While this list is obviously heavily laden with hyperbole, it's not too far from the truth. A completely user-driven set of application specifications, expressing only their wants, would likely provide nothing that they actually needed. At some point, you have to realize that users are, in fact, dreaming about toys.

In the end, software is about enabling users to get their work done faster and easier with a minimal amount of hassle. But users don't know that. Largely, they think it's about fun. Not all software is a first-person shooter or a massively multiplayer online role-playing game. (Sadly.) As members of a software team, it's our job to identify their needs and create software that meets those needs without getting sidetracked.

If you're starting a new system from scratch, don't waste your time asking the users what they want. Ask them what they need. Explain the difference to them. (Start by making sure you understand it yourself.) Explain that everything added to a product that isn't needed incurs additional cost, delays delivery, and has to be paid for by someone.

Build systems into your products to monitor which features are being used the most. Don't trust users to tell you; they aren't thinking about it. This kind of information is very useful for determining where to spend your time improving the product. It's also useful for identifying features that you think are critical (especially in business applications) but that aren't being used; you can then figure out why and target those areas for resolution.

If you write a solid subsystem like that for your software, you won't have to rely on user feedback, which isn't always reliable. A solid monitoring system will not lie to you. And that information will help you make much more intelligent decisions in the future. It's a question of silently observing the users as they actively use the product, instead of asking them about it after they've done so. All of us have a pretty short attention span when it comes to software use. I have no idea what the most used commands are in any given software package based on my own usage scenarios. For that reason, you couldn't ask me to tell you what commands or features I use and get a meaningful answer from me.
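To make that concrete, here's a minimal sketch of what such a usage-monitoring hook might look like. The class name, log file location, and feature names are all hypothetical, and a real implementation would probably batch its writes or record to a database instead of a flat file:

Option Strict On

Imports System.IO

' Records which features actually get used, so you can observe users
' instead of asking them. (Class name and log location are examples only.)
Public NotInheritable Class FeatureUsageLog

    ' Hypothetical log location.
    Private Shared ReadOnly LogPath As String = "feature-usage.log"

    Private Sub New()
        ' Prevents instantiation
    End Sub

    ' Appends one timestamped entry per feature invocation.
    Public Shared Sub Record(ByVal featureName As String)
        Dim writer As StreamWriter
        Try
            writer = File.AppendText(LogPath)
            writer.WriteLine("{0}|{1}", DateTime.Now.ToString("s"), featureName)
        Catch ex As IOException
            ' Monitoring should never break the feature it's watching.
        Finally
            If Not writer Is Nothing Then
                writer.Close()
            End If
        End Try
    End Sub

End Class

A call like FeatureUsageLog.Record("ExportReport") at the top of each command handler is enough; months later, the log tells you which features earn their keep and which ones nobody touches.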

So think about whether or not you need it. If you do, invest the time to do it. But don't do it because you want it.

Dew? Or water?

Tuesday, April 24, 2007

To Flash, or Not to Flash

Considering the design of an all-Flash website? Struggling with the difficult choices involved? Risks got you stumped? Consult this handy chart to help you out.

It'll vastly simplify your decision making process!

Tuesday, April 10, 2007

Yet Another Amateur Opinion

So, here I am, reading The Braidy Tester, where I frequently lurk because, well, I like the way the guy thinks. Recently, he posted an article that I agreed with, so I finally mustered up the courage to kick down the closet door and post a response.

The basic premise of the article (which you can find here) was that we should occasionally entertain the notion of organizing project teams around features instead of skill sets. I agreed. In my response, I said:

Going against the grain is scary but sometimes vital in our industry. It takes courage, and always involves risk. But calculated risk isn't always a bad thing.

I'm not a fan of blindly adhering to "established best practices," "the Next Big Methodology (TM)," or established team models. Just because something is a best practice at Acme Corporation doesn't mean it's a best practice at Real World Inc. The business model is different, the staffing patterns are different, the problem domains are different, *everything* is different. You have to be willing, at some point, to accept the idea that a best practice for *them* isn't necessarily a best practice for *you.*

Be flexible. Do something different. Experiment. Find out what actually works, and makes you more effective. Make the leap. At the end of it all, if you find out that it didn't work, you'll at least come out of the experiment knowing something that you didn't know before. And the acquisition of knowledge is never a wasted effort.

(Note that that's not a vanity quote. It's done for clarity.)

The operative words in that post are "blindly" and "calculated." However, a subsequent poster referred to my opinion as "amateur."

Pecker-waving aside, how does one define an "amateur opinion"? I doubt very highly that the poster has any idea what my level of experience is. I also suspect that he didn't read my post carefully, and skipped right over the key words: blindly and calculated.

For whatever it's worth, I'm always willing to accept the fact that there are countless people out there who know far more than I do. They have scads more experience than I do. But the last time I checked, you didn't have to have a license or a degree to have an opinion.

So, despite the fact that I apparently lack the credentials to post an opinion, I'll simply refer the reader to the 1st Amendment. And then I'll ask the reader to read what I said, and then think about what I said, and then ask about my experience before labeling me an amateur.

I assure you, sir, I am not. I am familiar with the phenomena that have led me to this opinion.

I have worked for several companies where a methodology or practice was adopted simply because it was popular at the time; no thought was given to whether or not it was suitable for the environment. This was a failure on the part of management: they should have conducted a proper risk assessment beforehand to determine its feasibility and suitability. An estimate of the ROI would have been handy. But instead, utter chaos ensued, schedules slipped, projects failed, and employees left because there was a sense that no one knew what the hell was going on and no one was in control.

Exceptions to the rule? Undoubtedly. But realistic examples nonetheless. It happens. But it shouldn't.

Many of the mainstream, established methodologies and best practices are popular and widely implemented because they do, in fact, work. But they don't always work everywhere. I refer you to the old adage: there is no silver bullet. You have to find the one that works for you. You don't just pick one out of thin air (at least, I hope not); you do your homework and find one that suits your business model and maximizes the return on your investment. Maximum return for the least amount of risk.

Case in point: at one firm where I've worked, post mortems were ruled out as a bad political move. The development team was so small that the only one who could provide meaningful input was the customer, and management didn't want the customer critiquing the software development process. Yet a post mortem is a valuable tool, and an established best practice for improving your overall process when it's done right.

What about code reviews? If you've only got one developer in your company, they're not feasible because that developer can't be truly trusted to review his own code objectively. Who's going to do it? Are you going to outsource it?

Truly small companies can't always adopt methodologies and practices designed to support larger development teams. It doesn't make any sense. Companies with extremely short development cycles can't adopt the methodologies appropriate to those who have the time to implement BUFD, and so forth. You have to be selective. And sometimes, you have to develop a custom methodology that works specifically for your firm, that may entail bits and pieces borrowed from other established methodologies.

Trying to shoe-horn your company into a methodology that doesn't fit you will only give you calluses in the most uncomfortable places. I am certainly not advocating that every company in the world should do its own thing. I never once advocated that. What I did do was denounce blind adherence to TNBT and best practices that weren't well-suited to your business model. When the need arises to deviate from what everyone else is doing because what they're doing doesn't work for you, it makes sense to entertain the notion of doing something different. Even then, you should only engage a different way of doing things if the risk involved in doing so is manageable and acceptable.

But hey, if that kind of an opinion makes me an amateur, so be it. I can live with that.

—Mike

Thursday, April 5, 2007

Refactoring Garbage Disposal

One of the most common tasks in my data layer is cleaning up my connections, transactions, and data readers. I do it a lot. The established code block for cleaning up a disposable object looks something like this: 

Public Sub ExecuteUpdate()

    Dim connection As SqlConnection
    Dim transaction As SqlTransaction
    Dim command As SqlCommand

    Try
        ' connectionString is assumed to be defined elsewhere.
        connection = New SqlConnection(connectionString)
        connection.Open()
        transaction = connection.BeginTransaction()

        command = connection.CreateCommand()
        command.Transaction = transaction
        command.CommandType = CommandType.StoredProcedure
        command.CommandText = "UpdateSomeData"
        command.ExecuteNonQuery()

        transaction.Commit()

    Catch ex As SqlException
        If Not transaction Is Nothing Then
            transaction.Rollback()
        End If

    Finally

        If Not command Is Nothing Then
            command.Dispose()
        End If

        If Not transaction Is Nothing Then
            transaction.Dispose()
        End If

        If Not connection Is Nothing Then
            connection.Dispose()
        End If

    End Try

End Sub

If you work with a lot of disposable objects (and I'm guessing that most developers do), you get to do a lot of this kind of stuff. The checks for Nothing (null in C#) are mandatory--if you think that a command or transaction object won't ever be null/nothing, boy, are you in for a surprise. Just wait until you try to invoke Dispose on one of those objects and it's not there.


It didn't take long for me to realize that the Finally block in my data access layer was ripe for refactoring. (The drawback was that much of the initial code was generated by a tool. Sucky part, that. But newer code uses the refactored stuff I'm about to show you.)


I hate repeating myself. I do. I really, really do. So I looked at that code and decided that I needed a class that would help me to safely dispose of objects. I needed a garbage disposal--kind of like the InSinkErator in your kitchen sink, where you can simply toss in vegetable scraps, egg shells, ice cubes, or whatever suits you, and it safely whisks them down the drain.


There were numerous questions. Should it be a separate class, or a base class? A base class implementation is problematic: it means that any class that wants access to the methods has to derive from it. I could see lots of classes that would want to use it, but I didn't want to insert the class into the inheritance chain.


(Global modules were ruled out right off the bat. Don't even suggest it. Global functions, like global variables, leave a really bad taste in my mouth. Let's not split hairs about what the compiler does behind my back. Just let it go, okay? Leave me to my AROCCF* ways.)


I settled on a separate class with Shared (C# static) methods. That way, you can access them on demand, as needed, from anywhere. After all, the methods don't need to maintain any class state; everything they need is passed in.


The resulting class provides four overloads of a single method: DisposeOf. Two of the overloads are type-safe versions designed to provide specific handling for the SqlDataReader and SqlConnection objects, ensuring that they're properly closed before being disposed of. Another overload takes a ParamArray of IDisposable objects, iterates over it, and invokes the appropriate overload for each item.


All of the DisposeOf overloads invoke the one basic implementation, which takes an IDisposable parameter. That method simply ensures that its argument isn't null, thereby avoiding a NullReferenceException. If the argument isn't null, it invokes IDisposable.Dispose on it.


The net effect is that the Finally block is reduced to this:

      Finally
          Disposer.DisposeOf(command)
          Disposer.DisposeOf(transaction)
          Disposer.DisposeOf(connection)
      End Try

 Or, in the best-case scenario, to this:

      Finally
          Disposer.DisposeOf(command, transaction, connection)
      End Try

There is one caveat to using this class to dispose of data access objects: you should always dispose of your transaction and connection last, and always in that order. Then again, I don't think that's due to this class. I think that's just the proper order of disposal. The overloaded version that takes the paramarray disposes of the objects in the order that you pass them.


If you have other IDisposable objects that you frequently use that require special handling, you can easily extend this class to help you out. This class was built for VB .NET 1.1, since we don't have the using keyword. The class makes it easier to clean up objects that should be cleaned up.
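For comparison, here's a rough sketch of what the same cleanup looks like with the Using statement that arrived in VB 2005 (the equivalent of C#'s using). It isn't available in VB .NET 1.1, which is exactly why the Disposer class exists; the method name and connection string below are placeholders, not part of the original code.

' Not valid in VB .NET 1.1; shown only for comparison. Each Using block
' disposes of its object automatically, even when an exception is thrown.
Public Sub ExecuteUpdateWithUsing()
    ' connectionString is assumed to be defined elsewhere.
    Using connection As New SqlConnection(connectionString)
        connection.Open()
        Using transaction As SqlTransaction = connection.BeginTransaction()
            Using command As SqlCommand = connection.CreateCommand()
                command.Transaction = transaction
                command.CommandType = CommandType.StoredProcedure
                command.CommandText = "UpdateSomeData"
                command.ExecuteNonQuery()
                transaction.Commit()
            End Using
        End Using
    End Using
End Sub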


The code for this class follows below. You are free to take this code and use it or modify it however you wish. You aren't required to mention me, credit me, or even acknowledge that I exist. However, if it blackens your eye, bloodies your nose, or blows your foot off, remember that I don't exist. :)


—Mike

Option Strict On

Imports System.Data
Imports System.Data.SqlClient

' Provides methods to assist in the safe disposal of objects.
Public NotInheritable Class Disposer

    Private Sub New()
        ' Prevents instantiation
    End Sub

    ' Safely disposes of an object. Avoids a
    ' NullReferenceException.
    Public Shared Sub DisposeOf(ByVal item As IDisposable)
        If Not item Is Nothing Then
            item.Dispose()
        End If
    End Sub

    ' Safely disposes of a SqlDataReader. Avoids a
    ' NullReferenceException. If the reader is open,
    ' it is first closed.
    Public Shared Sub DisposeOf(ByVal item As SqlDataReader)
        If Not item Is Nothing Then
            If Not item.IsClosed Then
                item.Close()
            End If
            DisposeOf(DirectCast(item, IDisposable))
        End If
    End Sub

    ' Safely disposes of a SqlConnection object. Avoids a
    ' NullReferenceException. If the connection is open,
    ' it is first closed.
    Public Shared Sub DisposeOf(ByVal connection As SqlConnection)
        If Not connection Is Nothing Then
            If Not connection.State = ConnectionState.Closed Then
                connection.Close()
            End If
            DisposeOf(DirectCast(connection, IDisposable))
        End If
    End Sub

    ' Safely disposes of an array of disposable objects,
    ' in the order in which they are passed.
    Public Shared Sub DisposeOf(ByVal ParamArray items() As IDisposable)
        For Each item As IDisposable In items
            If TypeOf item Is SqlConnection Then
                DisposeOf(DirectCast(item, SqlConnection))
            ElseIf TypeOf item Is SqlDataReader Then
                DisposeOf(DirectCast(item, SqlDataReader))
            Else
                DisposeOf(item)
            End If
        Next
    End Sub

End Class

* AROCCF = Anal Retentive Obsessive Compulsive Control Freak

Monday, April 2, 2007

Refactoring Your Way to Enlightenment

Take a moment to take stock of where you are now. What skills do you have? How can you improve them? Every day of your career, you should be learning something, improving something, refining something. Your skill set should be undergoing constant refactoring. This can only make you more efficient. If the stuff you're learning isn't making you more efficient, discard it.

At some point, you have to have the guts to go against the grain. Just because a "best practice" works for someone else at some other company doesn't necessarily make it a "best practice" for you and your company. A "proven methodology" isn't necessarily going to be a "proven methodology" for you. Have the guts to challenge the status quo. If it's not making you more efficient, it's likely hindering you. Refactor it out.

If your team doesn't have the funds to learn some new technique, seek that knowledge personally. There is no reason that your company's inability to fund team education should hold you back. Buy books and read. Search the Internet. Read blogs and programming newsgroups. Experiment with code. Ask your peers. Never stop seeking knowledge. Never stop learning.

Take some of your old code, copy it, and then refactor the hell out of it. You'll be surprised at what you can learn by simply refactoring code: more efficient ways to implement things that you did before (and will likely do again), and better algorithms that work faster, use fewer resources, and are easier to maintain. Refactoring improves your skill set. Refactoring your own code, on your own time, is a personal competition against yourself to improve your own skill set.

You don't need to compete against anyone else. Coding cowboys, platform fanboys, methodology purists, conspiracy theorists...you shouldn't be worrying about them. You should worry about yourself. Make yourself as good as you can possibly be. Every day, ask yourself this essential question: "How can I improve myself today?" Find that way, and then do it. Set aside a little time every day to refactor your skill set.

Each day is an opportunity to make yourself a little bit better, a little more efficient than you were the day before. With each passing day, you have the opportunity to become smarter, faster, wiser, more valuable. But that means taking care to constantly revise your skill set. Have the wherewithal to discard habits and ideas that simply don't work. If you suspect you're doing something one way simply because you've always done it that way, or because that's the way everyone else does it, question it. If you can't see a tangible benefit to it, refactor it out.

Look, I'm not Gandhi or anything. But I can tell you this: I firmly believe that the key to success in this field is a personal commitment to growth. Don't trust anyone to just hand you knowledge, and don't expect to stumble across the skills you'll need. You have to actively reach out and take the skills and knowledge you need to be successful. It's an active task. It's not something you'll just acquire through osmosis.

We all have to get to a point where we realize that we're not as efficient, not as smart, not as skilled, and nowhere near as good as we could be. There's always someone out there who's better than we are.

Our goal isn't to compete with them. Our goal is to constantly aspire to be better than we are right now, at this very moment.

Thursday, March 29, 2007

What Have You Learned Over the Last Year?

Software development is a dynamic field. It's also a vast field. We deal with a plethora of technologies that baffle most folks; when we start talking shop around our nontechnical friends, they tend to look at us with blank faces, blinking eyes, and that curious bit of spittle leaking out the side of their mouths.

And yet, we persevere. We work grueling hours, under often impossible schedules, with vaguely defined specifications ("What? Specifications you say? Hah!"), and management that is frequently more concerned with ship dates than code or product quality. None of these things is surprising to anyone who's worked more than six months in software development.

If you've had the pleasure to work in a company that actually cares about product quality and developer sanity, count yourself one of the lucky few. And do everything in your power to hold onto that job.

But this post isn't about our job quality. It's about what we developers do while we're solving incredibly complex problems with less than optimal tools, less time than we need and fewer hands and minds than we'd like.

We think, we create, we innovate.

Give some serious thought to some of the really tough problems you've faced over the last year. Not the ones where you had the right tools to solve the problem, or where you had the knowledge you needed to do it. Think about the ones where you were completely out of your depth, where you had no idea what you were doing, or how the heck you were going to get out alive. You didn't have the knowledge. You lacked the tools. The clock was ticking. And you had a product to deliver.

And somehow, you survived and saved the day.

Experiences like this aren't new. They happen every day, in companies all around the world. Seemingly impossible demands are placed on developers, designers, DBAs, architects, testers, release engineers, technical writers, project managers, and everyone else involved in getting the product out the door. It's a monumental effort. In a lot of cases, development shops are severely understaffed, and the folks who work in them have to wear several hats--in the worst case scenario, one poor bastard gets the lucky job of wearing all the hats.  

And somehow, through all that mess, a product gets delivered. If it didn't, the company wouldn't be afloat. Sure, sometimes it doesn't happen. There are slippages, defects, embarrassments. But in the end, the product ships. Problems are solved. Work goes on. And the folks doing the work are learning. Evolving. Improving. Honing their craft.

If they're not, there's something seriously wrong.

At the end of every project, every major milestone, regardless of how well or poorly executed it was, how close to the predicted ship date it was, how riddled with defects it may have been, there's a chance to look back and reflect on what you learned from the experience. We learn from our mistakes and our successes. We learn what worked, what didn't, what can be improved, and what should be left alone.

This last year has been a whirlwind for me. I've learned a lot. For instance, a product that I thought was stellar when I first built it turned out to have lots of room for improvement. Sure, it was pretty, but it wasn't all that usable. It was notoriously difficult to debug and maintain, and it was far too easy for users to enter invalid data. I was pretty much coding in a vacuum (and not by choice), so I didn't have the benefit of being able to ask my coworkers for advice. There were no Web designers to ask for input, no testers to rely on to catch my mistakes, no other developers to seek guidance from regarding better class models or unit testing methodologies.

But there were the users and their feedback. So I watched them, listened to them, and learned from them. I studied other user interfaces that I admired, and that users were passionate about. I studied other respected developers, designers and engineers. And I applied their philosophies to the subsequent releases.

The lessons I learned were simple, but breathtaking. Simplicity in design invariably leads to simplicity in testing. If it just works, it just works. Smaller, simpler interfaces are easier to test. Pages with thirty fields and all the validators needed to ensure they contain solid data are orders of magnitude harder to test. But smaller pages that contain fewer fields are faster, easier to use, easier to test, and better encapsulated, and once you've determined that they're working, you're done with them. It radically changed my UI design philosophy. And I'm happy to report that the software is far better as a result. It just works.

Every day, someone somewhere learns something that changes their skill set for the better. When things are looking down, and they think they want to get the hell out of Dodge, it helps to remember all the ways in which we've improved.

The learning never stops. The improvement never stops. But how often do we stop and think about what we've learned? Do we ever give ourselves credit for what we've become? Taking a moment to reflect on where we've been is a crucial part of directing our future; we can't just keep stumbling around repeating the same mistakes. So take a moment, look back, and ask yourself this very simple question: What have I learned over the last year?

You might just be surprised.