An interesting topic came up on the ASP.NET newsgroup forums today. The question arose: "Is there a standard naming convention for session variables?" While the debate meandered around the various conventions, their suitability, and whether or not one even applied to session variables, my chief thought was whether or not one should even see the names of the session variables. Enter the basic premise of information hiding.

The ASP.NET Session object is, essentially, a keyed collection. (Let's not bicker about its internal implementation. List, hash table, whatever. It's essentially a keyed collection, accessed by a string key.) My point is that the individual keys used to access the items inside should not be replicated throughout your software. What if you choose to change the name of the key? What if the key's name isn't descriptive enough, and needs to be refined?

Sure, you could use a constant, but where do you put it? In a global constants file? That's rather icky. As anyone knows, a global constants file can quickly grow quite large, and navigating it can become a nightmare in and of itself. Then there's the problem of scoping. What if you have two constants with very similar names, and you want them both to have global scope? Now you get to mangle the name of one of them. Good luck choosing.

The ideal solution is to encapsulate the session variables in a class that manages them. This isn't as difficult as it seems, and it provides a host of benefits. The class is relatively straightforward, as shown below.

Option Explicit On
Option Strict On

Imports System.Web
Imports System.Web.SessionState

Friend NotInheritable Class MySession

    Private Const CompanySessionKey As String = "Company"

    ' Private constructor: all members are Shared, so there is
    ' no reason to create an instance of this class.
    Private Sub New()
    End Sub

    Private Shared ReadOnly Property Session() As HttpSessionState
        Get
            ' Guard against a missing context (e.g., outside a request),
            ' so that IsValid can return False instead of throwing.
            If HttpContext.Current Is Nothing Then
                Return Nothing
            End If
            Return HttpContext.Current.Session
        End Get
    End Property

    Public Shared ReadOnly Property IsValid() As Boolean
        Get
            Return Not Session Is Nothing
        End Get
    End Property

    Public Shared Property Company() As String
        Get
            Return DirectCast(Session(CompanySessionKey), String)
        End Get
        Set(ByVal value As String)
            Session(CompanySessionKey) = value
        End Set
    End Property

End Class
A few things to note about this class:
- The class has Friend scope. It can't be accessed outside of the Web application. That's just sound security.
- The Private constructor prevents instantiation. Since all of the other members are marked Shared, there's no reason to create an instance of this class.
- The Session property returns an instance of the current session. This property is marked Private and Shared, and completely hides how we're getting the session information.
- The IsValid property returns True if we have a valid session object. This helps us to avoid calling the various properties on the MySession class if there isn't a valid session to begin with. This might be the case in an exception handler.
- The Company property is marked Public and Shared, and is responsible for getting a value out of the session, and putting it into the session. It uses a class-scope constant to reference the session variable, ensuring that both the getter and setter reference the same session variable. Further, the property is strongly typed. When you call this property, the session variable is already converted to the appropriate data type. (The sketch just after this list shows the same pattern for a non-string value.)
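The strong typing really pays off for non-string values. As a minimal sketch (the UserID variable here is hypothetical, purely for illustration), an integer-valued property added to the same class would look like this:

    Private Const UserIDSessionKey As String = "UserID"

    Public Shared Property UserID() As Integer
        Get
            ' CInt converts the stored Object to an Integer;
            ' it yields 0 if the session variable is missing.
            Return CInt(Session(UserIDSessionKey))
        End Get
        Set(ByVal value As Integer)
            Session(UserIDSessionKey) = value
        End Set
    End Property

Callers never perform the conversion themselves, so if the underlying storage ever changes, you touch exactly one place in the code.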
Creating this class provides a clean, straightforward way to reference session variables in your code. For example, consider the following (not uncommon) code sample:
lblCompany.Text = DirectCast(Session("company"), String)
Using the class described above, this code is reduced to the following:
lblCompany.Text = MySession.Company
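Writing to the session is just as clean. In a hypothetical save handler, for example:

MySession.Company = txtCompany.Text

No magic strings, and no casts.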
Now, that may not look like much to you now, but imagine what it will save you when you have lots of session variables. And imagine how much easier it will be to refactor those variables should the need arise to do so.
Finally, you can provide code comments in a centralized place that document what the session variables mean and are used for. That, in and of itself, is a huge boon to productivity. A little clarity goes a long way.
And just to show that I eat my own dog food, I use this very class in my own production software. And it does work, and it does save lots of time. I don't have to remember what that cryptic string is that retrieves a specific session variable. Instead, I type "MySession." and IntelliSense presents me with a list of available session variables.
You might be wondering where the exception handling code is. There isn't any, and that's by design. If a session variable is missing in a getter, it doesn't do me any good to catch that condition and rethrow it--I won't be telling the caller anything that the exception won't already be telling them. My exception handling code in the pages and the fallback exception handler in global.asax are responsible for cleanly handling those exceptions.
It also doesn't do me any good to handle the NullReferenceException that will be thrown if I try to reference the Session and it's not there. Again, the global exception handler will take care of it. I could, of course, wrap the exception in a custom ApplicationException, and you always have that option. Then again, I could always perform the check by calling MySession.IsValid before attempting to retrieve any properties, and avoid the exception altogether.
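That check is a one-liner. A minimal sketch of the guard, as it might appear in an error page or fallback handler:

If MySession.IsValid Then
    lblCompany.Text = MySession.Company
Else
    lblCompany.Text = String.Empty ' No session; degrade gracefully.
End If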
So there you have it. It's not a hard class to implement, but it pays off pretty well. For a little upfront effort, you get a decent return on your investment. Your code's readability and maintainability improve remarkably, and you know that you can refactor it with a high degree of safety. Further, you can document those session variables in the code, close to the statements that get and set them. And if you decide that you no longer want to use certain session variables, you can easily deprecate them by applying the Obsolete attribute to the properties to quickly identify every point in the code that's using them.
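To illustrate that last point, deprecating a property is as simple as decorating it. A minimal sketch (the suggested replacement name is hypothetical):

<Obsolete("Use MySession.CompanyName instead.")> _
Public Shared Property Company() As String
    Get
        Return DirectCast(Session(CompanySessionKey), String)
    End Get
    Set(ByVal value As String)
        Session(CompanySessionKey) = value
    End Set
End Property

Every reference to MySession.Company now produces a compiler warning, handing you a ready-made to-do list for the refactoring.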
So think about it. And if it's worth your time, implement it. I think you'll be glad you did.
Just today, as I was working to deliver a major release on a product I'm working on, I found myself sidetracked by a little project of my own. You see, there's this little problem with one of the data fields in the database. It's not major, just an annoyance, like a four year old poking you in the ribs for an hour, asking repeatedly, "Does this bug you?" Well, it's been bugging me for ages. And I found myself today doing database queries and pasting data into Excel to have Excel build update queries for me using formulas (nice little time saver that is) so that I could include those statements in the SQL script to accompany the next major release.

And then it hit me: no one asked for this. It's not included in the test plan for this release. It's gold plating. I'm doing this because I want to, not because the customer asked me to. Whoa, there, cowboy. Get a grip on yourself. Set that stuff aside, and focus on what you need to do, not what you want to do. There are far more important deliverables to worry about, and you don't have time to waste on unauthorized features or fixes. Especially when those fixes are for issues that don't negatively impact the application. (It was a display issue--first name before last name.) It's just fluff.

In reflection, I find myself experiencing these kinds of monumental self-control issues all the time. I get really excited about the things I could do for the customer, and I really want to do them. But the truth is that just because I can do something for them, it doesn't mean that I should. Any change that I make to the product has the potential to introduce new defects into the system. That's why every change that I make must be tested. It's why there's so much testing involved in software. (And if there isn't, something's seriously wrong.)

And the testing doesn't just occur here, at my desk. It happens at the client. The product undergoes rigorous user acceptance testing. And testing isn't cheap--it consumes precious man-hours, which equate to someone's hourly wages. And if I haven't gotten it right, it has to be fixed and retested. It can amount to massive amounts of money in man-hours of testing. Let's not forget the impact that the change has on updating the test plan, the release notes, requirements documentation, and user guides. Plus any associated costs with reprinting and redistributing them.

And what happens if the customer decides that my unauthorized change needs to be taken out? What if its impact on the system is so drastically negative that it must be removed? Can it be easily rolled back? And if it must be removed, what are the costs associated with doing so, and republishing all the updated documentation and builds? Are you getting my point yet? The cost of a simple change isn't just what it takes me to code and test it at my desk. That's just the tip of a massive iceberg.

It takes a lot of self-control to prevent myself from adding features to a system when those features aren't (1) requested by the customer, (2) included in the project plan, and (3) absolutely critical to the current release. The problem, I think, is that a lot of developers out there, myself included, don't get sufficient mentoring in the discipline of self-control when it comes to software development. For example, we're all hailing the virtues of refactoring code to improve its maintainability, and I agree that that's a good and useful thing.
But how many developers know that just because you can refactor a piece of code doesn't mean that you should? How many developers are out there bogging down project schedules because they're busy refactoring code when they should be developing software that meets the requirements for the project deadline? (And here, I will sheepishly raise my hand.)

It occurs to me that before I ever modify a piece of code, before I ever touch that keyboard and write any new class or method, or create any new window or Web page, I should be asking myself, "Is this in the project plan? Is it critical to the current release?" If it doesn't satisfy one of those questions, I shouldn't be doing it. The key to getting that product out the door on time is staying focused, and not getting sidetracked by fluff.

Take it from someone with experience: it's easy to get sidetracked by fluff. Adding cool features is easy to do, because you're excited about them, and motivated to do them. Working on the required deliverables is hard work; it requires discipline and self-control. You have to stay focused and keep your eyes on the target. (You thought I was going to say "ball," didn't you?) But we, as human beings, don't want to do what we need to do; we want to do what interests us, and what excites us. It takes an act of sheer will to resist that urge, to restrain ourselves, and get the real work done. I would imagine that one of the things that separates a mature developer from a novice is his or her ability to resist that urge to introduce fluff into software.

In the end, I think it might be a good idea if programming courses included curricula on self-control as a discipline for developers. And I mean that quite seriously. We need to have it drilled into our heads that we shouldn't be adding anything to the product that only serves our own sense of what's cool or useful. That's not to say that developers can't sometimes predict useful features before the users do; but those features cannot and should not be introduced haphazardly into a product. They should be included as planned features as part of a scheduled release, so that they can be adequately tested and documented, and not just suddenly sprung upon someone as an easter egg. There's a time and a place for everything. Gung-ho initiative has its proper place; software development isn't it.
The title of this post might lead you to believe that I'm going to write about bad coding practices. Depending on your point of view, you might be right. If you're a product or project manager, that might be your position. If you're a developer, it might not. But I'm going to ramble for a bit about my personal approach to software development. I basically have two philosophies when it comes to software development:

- A software developer's job is not to write software. It is to service the customer. The fact that he writes software to do it is entirely coincidental.
- In order to do his job, a software developer must write code that is so well written and so well documented that he or she can be replaced at any time without impacting the project (or, by extension, the customer).
You may think that the second one is a lofty goal. But a goal should be lofty; it's something to aim for. If it were easily attainable, anyone could do it. Basically, it comes down to this: I'm obsessive about software quality. Not just coding, but the entire process: everything from the initial requirements gathering to the delivery of the product. I have to be; where I work, I'm a one-man software development shop. So the attitude that I have to take is that my job is to put myself out of work. I have to do work that is of such high quality that I am eventually no longer needed. That is what I mean by coding on self-destruct. So my personal promise to the customer is to deliver high-quality software on time and within budget.

When I prepare a user guide, I know that real people have to use it. They have to be able to read it. I know how frustrated I get when I pick up some piece of software and it refers me to the user guide, and it's some poorly slapped-together rich text file or help file that isn't properly indexed or cross-referenced. So I take the time to make sure that the user guide is up to date, has a table of contents and an index, that it has full coverage of the software, lots of illustrations and how-to guides, and everything the users might need to know. I use plain English, not a bunch of mumbo jumbo that only tech geeks would enjoy. The users are not developers. The same thing holds true for the requirements specifications and test plans. You have to know your target audience and use the language appropriate for them.

(Right now, however, there seems to be a big push in the industry to move away from what they are scathingly referring to as BDUF (Big Design Up Front). I take issue with the idea that we shouldn't invest large amounts of time up front with the customer to determine their needs and wants. I think that we're treading on thin ice when we slap a system together and foist it upon them without sufficiently planning it out. I don't know that we need to invest up to a year in analysis and design, but I think that we need to spend enough time in analysis and design to ensure that the system's scope doesn't grow out of control, and to ensure that everyone knows what the system is supposed to be and what it isn't. The costs associated with ripping out unwanted features and adding new ones that weren't identified due to a lack of upfront planning are enormous. Every time you do that, you introduce the risk of requiring full regression testing, system downtime for redeployment, reprinting of the manuals, and retraining of the users. It's expensive. Adequate planning can mitigate those costs.)

When I write the code, I write it for developers. But even then, I don't have any idea what the experience level of my successor will be. Will he or she have my level of experience? If not, they're going to have a tough time picking up my code and maintaining it unless I make it pretty darned easy to understand and maintain. That means self-documenting code, using a standard naming convention, a consistent coding model, and thorough use of comments. If I get hit by a bus, the company can't afford to have the project come to a standstill for six months while someone tries to learn what the heck I was doing. It's my responsibility to mitigate the amount of time it takes my successor to get up to speed. So I do that. And any responsible programmer should be doing that as well. The payoff isn't just for my successor. It benefits me as well.
I find that my own code is far easier to understand and maintain as I go back in to make defect corrections and add new features or remove those that have become obsolete. It takes me far less time to understand what a particular piece of code is doing and why if I've coded it consistently and commented it than if I haven't.

Everything that is required to build or deploy the software is stored in the source code repository: source code files, images, build files, batch files that prepare the build, SQL scripts, etc. I also store the release notes (Word documents), the user guide, test plan, requirements documents, and related documentation in it. It's a myth that you can't store these files in a source code repository. They're just binary files, and you'll get full version control on them.

I am always looking for ways to improve the visibility of the project. High project visibility gives the stakeholders a sense of involvement; they don't feel like they're sinking lots of money into a black hole and just hoping that something will come out in the end. To that end, constant communication with them is vital. They must always feel like their input is important; after all, it's their baby, and whatever I'm producing is for them, not me. They need to know that it's alive and kicking, that it's growing, a living, breathing thing. Emails, phone calls, conference calls, a project web site, and on-site visits with demonstrations of the product go a long way towards keeping them abreast of its state.

The customer's sense of involvement is an important part of the software's quality. They provide important feedback throughout the development of the product that will prevent me from making potentially costly mistakes in the design that weren't caught in the initial analysis. Further, their review of the system might alert them to needs that they weren't aware of earlier--some of them critical, and some that can be slated for future releases.

We use a Web-based defect tracking system so that our customers can report defects as they find them. We categorize them and tackle critical defects first. Critical defects are those that result in a system crash or data corruption. After that, we deal with high-priority defects: those that generate error messages. Next come medium-priority defects: features that don't generate error messages and don't behave the way they should, but have workarounds. Then come low-priority defects: cosmetic issues, such as font problems, spelling errors, and so forth. Feature requests are an entirely different category. Our goal is to have zero defects in the database at all times. If I find a defect that isn't in the database, I report it, and then I strive to address it in the current release on the development server.

The smallest defects irritate me. Some folks don't understand why. But they do. When that software goes out the door, it's something that I made, the work of my hands. It bothers me when I know that it's going out there with a defect in it. And it's funny, because I know that there's no such thing as defect-free software. Still, I am reminded of a passage from David Eddings' book, Pawn of Prophecy, in which the young Garion asks Durnik the smith why he bothered to fix a broken part on a wagon. It was in a location that no one would ever see. No one would know it was there, Garion had told him. "But I'll know," Durnik had replied. And that's how I feel about defects. No one else will know these little defects are there. But I do.
They may never crop up, but I'll know they're there. And it's my job to get rid of them. I will never be satisfied until every last one of them is gone. Because in my eyes, my promise to the customer, to deliver high-quality software on time and within budget, hasn't been met until those defects are gone.

A high quality application, whether it runs in a browser or on the desktop, needs to have an interface that is clean, consistent, visually appealing, and easy to use. It should provide lots of visual cues to the user about the task at hand. It shouldn't leave them guessing about what they're doing. If it's a data entry form, it should provide plenty of immediate data validation that helps them to enter good data rather than fights them in their efforts to do so. It should never make it easy for them to lose their work. ("You are about to discard your changes. Are you sure you want to do this?") It should use plain English or language from their particular domain to describe the tasks at hand, and not technical jargon.

An application should never assume that the user's vision is as good as the developer's or the designer's. Too many applications out there rely solely on color to distinguish between records, forgetting about colorblind users. Or they use very small fonts or fixed window/page sizes, forgetting about users who run their screens at very low resolutions because they have poor vision. The result is an unwieldy application that requires lots of scrolling in a Web browser, or the use of obscure key commands to manipulate windows and move them around the screen.

I can't cover everything in one post. But I think you get the point. Software quality covers the full spectrum of the development process. There's room for improvement in the entire process. And improving it is an iterative process. You do it, then you review your process, and seek to improve it. With each iteration, you get a little better at it.

I don't write software for a living. I work to ensure that my customers can get their jobs done as quickly as possible, and with a minimal amount of hassle. It is purely a coincidence that the tools of my trade happen to be a compiler and a computer. All of the processes that I have described are merely ways in which I strive to ensure that my customers are happy, so that I can eventually walk away from that project, knowing that they don't need me anymore, that the project will run just fine without me. When that day gets here, then I'll know my job is done.
I've often been frustrated by the difficulty of testing client-side script in my .NET Web applications. So, being the Google-savvy user that I am, I set out to find a solution and stumbled across Walt Ritscher's post, which I will shamelessly quote here, because I like to have this kind of information handy:

- Enable client-side script debugging in Internet Explorer
- Open Microsoft Internet Explorer.
- On the Tools menu, click Internet Options.
- On the Advanced tab, locate the Browsing section, and uncheck the Disable script debugging check box, and then click OK.
- Close Internet Explorer.
- In your JavaScript function, add the keyword debugger. This causes VS.NET to switch to debug mode when it runs that line.
- Run your ASP.NET application in debug mode.
I've enthusiastically tested this little tidbit, and determined that it works exactly as he describes. Thanks, Walt! This tip is a life-saver.

However, I ran into an interesting problem: when I place the debugger keyword inside an .aspx file, Visual Studio .NET loads the wrong page into the debugger. Instead of stepping through script code, I'm stepping through HTML. It's quite peculiar. If I remove the debugger keyword from inline script (that is, script that occurs between <SCRIPT> and </SCRIPT> tags in the page itself) and put it inside an included script file, everything works fine. Apparently, Visual Studio is having some difficulty with this sort of thing.

The solution, of course, is to write a simple JavaScript include file that invokes the debugger, and include it whenever I want to invoke it. It's a simple (and yet inconvenient) workaround. It also complicates my build process, since it's one more file I have to make sure I remove from the shipped product. But I am able to debug script, and that's a godsend in and of itself. Just being able to step through JavaScript code and watch the variables in the locals window is more than enough to make up for the hassles of an include file and an additional line in my build script.
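For what it's worth, the entire include file amounts to a single statement. A minimal sketch, with a hypothetical file name:

// debugbreak.js -- development builds only; strip this from the shipped product.
// The debugger statement halts script execution and breaks into the attached debugger.
debugger;

Then, during development, the page pulls it in with something like:

<script language="javascript" src="debugbreak.js"></script>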
This is one long, tragic story. I own a Pavilion m7480n multimedia PC, which I purchased around April of 2006. I purchased it at CompUSA, in an emergency, because I was in the middle of a serious project deadline and my personal computer at home died in a violent, flaming blaze of glory. I needed a computer fast. One that was fast, with a lot of memory and disk space. I had a limited budget, but wanted one that was going to last, and suited my needs as a software developer and a gaming enthusiast. The Pavilion multimedia PC seemed like a good choice at the time, and it has been, up until quite recently.

I keep a clean machine, in an effort to get the best performance out of it. I removed all the additional games and Norton Internet Security once the OS was installed. Believe me, these HP machines come with a lot of junk. Norton Internet Security has got to be one of the biggest pieces of junk in the galaxy. We're talking about a piece of software that reduces a 3GHz processor equipped with 2GB of RAM to a crawl. Combine that with the fact that the machine ships with about 30 games, two personal finance packages (both MS Money and Quicken), a trial version of Microsoft Office, tons of media software, extra theming software, and all kinds of excess crap to give you the oohs and ahhs, and you've turned what should be a screaming machine into a clay tablet and a stylus.

And let's not forget that HP ships with its own updating software, which is highly intrusive. They also don't provide you with recovery discs, or original Windows discs. You have to create your own recovery CDs off of a hidden partition on the hard disk, and even that doesn't create genuine Windows CDs for you. If you perform a recovery from those CDs, you get the whole shebang or nothing. Which means removing the whole assortment of useless and system-degrading software all over again. Out of the box, the machine has well in excess of 40 processes running, and the mouse often stutters as you drag it across the screen.

Needless to say, I do a lot of cleanup when I first get a machine. I remove the software I don't use. Especially tons of little kiddie games. Personal financial software goes, internet service offers go, Office trials, Quicken & Money go, and the dreaded Norton Internet Insecurity. Once it's cleaned, I shut down non-essential services, and defragment the hard disk, making sure it's clean. I keep a clean, fast machine. By the time I'm done, I'll have, on average, somewhere around 27 to 30 processes running, very low memory consumption, and a blazingly fast machine. Which is what one would expect. I sit behind a hardware firewall, and keep the machine clean of viruses, adware, and spyware as well. Regular maintenance keeps it in tip-top shape.

Everything was going fine, and the machine was a gem for about six months. One morning, I was browsing the Internet, reading the news, as is my normal habit, and everything was peachy. When I came home that night, I moved the mouse, and suddenly the machine froze. The behavior was peculiar. You could move the mouse, and the selection rectangle would appear, but it would not respond to mouse clicks. In addition, it wouldn't respond to keyboard input. I rebooted the machine, and everything came up just fine. But after about 30 minutes of use, it happened again. Reboot, and it happened again, after about 10 minutes. Reboot, and it happened again, about 5 minutes later. You get the picture.
Eventually, it got to where the machine would lock within minutes or seconds of full system startup. Once Windows was loaded, the system would freeze. I could bring up the Task Manager and see the processes list; you could watch as the CPU cycles for each process and their memory consumption remained active right up until the keyboard and mouse "went dead." The CPU consumption and memory usage just froze. Now, obviously the USB port hadn't died--the mouse was still working. Swapped it out to prove it. And the CPU was working just fine. The system works beautifully in safe mode, with a USB keyboard and mouse. I even swapped in a PS/2 mouse and keyboard, and the same issue occurred. Something else was happening.

I was fairly stumped. Being a software professional with 20 years' experience, that was fairly hard to admit. So, in an act of total humility, I packaged it up and lugged it to CompUSA, where I had purchased the machine. I had paid for the warranty, so I figured I might as well let someone else rack their brains over it. It was probably something simple, anyway, and I was just overlooking it. So I explained what I had done, what I knew, and gave them my number, telling them to call me if they needed any further information.

What a mistake that was. After telling me it would take three days, they held the machine for three weeks. I never got a call. I had to call them and ask them about the status and whereabouts of my machine. At one point, a week into the ordeal, they said it was in process, and should be ready in a day or so. Obviously, that didn't turn out to be true. When they finally called to tell me it was ready, the technician told me that my problem was that I was running too many processes.

You can imagine my immediate reaction. I told him, not so subtly, that he was blowing air out of his ass. There was no possible way that that was the problem. It was running the same number of processes that it had been running since I had originally configured the machine six months earlier. No new applications had been installed. That could not possibly have been the problem. He argued with me about it for twenty minutes. Finally, I told him that he should just pack up the machine and I would take it to the Geek Squad for a second opinion, and I'd get my money back from CompUSA since they had not resolved the issue. He said, "Well, you can take it there, but I used to work there, and they'll tell you the same thing." Personally, I was relieved he didn't work there anymore. It bolstered my confidence in getting a more intelligent response from Geek Squad.

Unfortunately, I never got to take it to Geek Squad. The situation got worse when I arrived at CompUSA. I contacted the store manager and explained the situation to him, and told him why I wanted my money back. Naturally, the store manager had no technical savvy whatsoever. So he brought in the manager of the technical repairs department. And he grabbed the kid that had "repaired" my machine. They asked the kid to explain what he had done. This kid had the audacity to speak to me like I was an idiot. He began his diatribe by trying to explain to me the difference between hardware and software--badly. He spoke slowly, softly, like I was a four year old, and he was trying to explain why you don't touch a hot burner on the stove. I stopped him immediately and disabused him of the notion that I had no technical knowledge. I explained to him that I know the difference between a hardware interrupt and a software interrupt.
Then I asked him a series of questions. Did you scan the hard disk for errors? Is it properly defragmented? Did you perform memory diagnostics? Are all the interrupts working properly? Did you check for hardware conflicts? Did you perform a virus scan? Is it free of spyware and adware? Did you ensure that all the device drivers were up to date? To each question, he nodded slowly and answered, "Yeah." Then he explained to me that the machine worked fine in safe mode, and that they had concluded that it was not a hardware problem. That was fine. I could accept that.

But then, to my utter horror, he said that they had performed a system restore from the hidden recovery partition on my machine. "You did a what?" He went on a rapid speech then about why they did it and that restoring the system seemed to have solved the problem. He claimed it was a nondestructive restore. I was flabbergasted. I told them to bring the machine out, set it up, and show me. They did so, and when it booted, you could see all the original software reinstalled, and the applications that I used for work were no longer installed. I asked them to bring up Task Manager and count the number of running processes. It was, of course, higher than the number that had previously been running. More importantly, I noticed that the additional user accounts that had been on the machine were gone, and so was their data! "You lost my data." They insisted it was still there.

I was irate at this point. I wanted to know why no one had called me to ask any questions. They'd had my number for the full three weeks, and no one had called me before they'd taken this catastrophic step. The kid stammered for a second before I looked at him and said, "Look, I'm in software development, and I have to be in constant contact with my customer to do my job. How can you do your job without calling the customer? You had my number for three weeks. Why didn't you call me before you made this kind of a decision?" He couldn't answer the question.

Then I asked him if he had bothered to check the graphics driver before he did the restore. I reminded him that in safe mode, Windows uses a default, safe 640x480 graphics driver--not the 3rd party driver that you may have installed. He sidestepped the question, but I was so ticked off now that I grilled him. "Did you check the graphics driver or not?" He sidestepped three or four times before I made him answer it. "Just answer the question. Did you check that before you wiped my machine or not? It's a yes or no question." He finally admitted that he did not check the graphics driver to see if it was causing the problem. "So there it is. You didn't do your job. You erased my hard disk, lost my data, and didn't solve this problem. I can guarantee you that it will reoccur. I want my money back."

The store manager disagreed, believing that they had solved the problem. He settled for half the fee back. I counted my losses at that point, because I was ready to do violence. I packed up the machine, and left. (I immediately went down the street to Best Buy and picked up a new laptop for $1,300. I've sworn off of CompUSA. Their arrogance, lack of thoroughness, and the way they handled that whole ordeal have led me to boycott them. I've always had bad experiences with them in the past, but this was simply the last straw. It's sad that they happen to be the only place you can buy Apple computers in my neck of the woods.)

Two days later, the problem came back.
I restored the system again, hoping it might solve the problem. I wasn't sure at this point if they'd done a full restore or not. This time, I wanted to be sure. Within days, it reoccurred. So there I was, sitting at home with a machine that didn't work. In the meanwhile, I borrowed a laptop from the office so I could continue to work from home, and used it to research the issue. I couldn't find anyone on the Internet reporting a similar issue.

Frustrated, I described the problem to the CIO and IT tech at work. Their first guess had been a bad USB controller; that was obviously not the problem. All the USB devices worked fine in safe mode. So they invited me to bring in the machine and let them take a look at it. At first, they were boggled by the problem itself. They'd never seen anything like it. I described the problem to them again to refresh their memory, then explained how they could get the Task Manager up and watch the processes freeze. Sure enough, there it was. The mouse cursor would move, and the selection rectangle would sometimes appear, but you couldn't click on anything, and the system didn't respond to the keyboard. The only thing you could do was reset it via the power button. (And I abhor shutting the machine down that way.)

They looked at it for a day and a half. Finally, using MSCONFIG and process of elimination, they were able to pin down the culprit: Windows Update. Essentially, they disabled every service and startup program, and then rebooted the machine. Then they brought each one online, and waited to see if the problem would reappear. It only reappeared if you turned on Windows Update. And it reappeared every time you turned on Windows Update.

Well, that fairly sucked. I need Windows Update. I have to be able to get critical security updates for the system. So I had to do something. We created the recovery CDs, and embarked on a plan. We were going to fully reinstall the system, and see if it worked. On the way home, I got another brainy idea. What if I replaced Windows XP Media Center Edition with Windows XP Professional? After all, I don't need all that extra multimedia stuff. I would remove the multimedia components from the box, and install XP Pro on it. Then I'd have a clean install, without all the crap that ships on the recovery CDs. And it should work. My theory is that the problem is with something that's shipping on those recovery CDs, or something that HP Update is pushing onto the machines, and breaking Windows Update. Ultimately, Windows Update has to work.

So I stopped by Staples, forked over $300 for a Windows XP box, and went home to set up the machine. To my chagrin, the software won't accept the product keys when I run Setup from inside the existing Windows session. Bummer. I call Microsoft, they get me a new CD key, and that doesn't work either. I call Microsoft back. Apparently, I have to boot from the CD to install, because even if you specify full installation, Windows will treat it as an upgrade if you launch Setup from within an existing Windows session, and you cannot upgrade Media Center Edition to Professional Edition. (Apparently, that would be a downgrade, which is why the keys are invalid.) So I try to run Setup from the disk, booting the machine from the CD. Lo and behold--XP needs SATA RAID drivers. I don't have them, and HP won't provide them for use with Windows XP Pro. Isn't that nice? Further, when you install 3rd party RAID drivers in Windows XP, it expects to get them from a floppy drive.
I don't know if Microsoft has noticed lately, but computers these days don't have floppy disk drives. My mom's machine has one, but that's only because it's a number of years old, and we slapped it in because we had one on hand and wanted to close an open slot on the front of her machine that was letting dust into the chassis. (And no, I didn't have a bay cover on hand.) The Microsoft tech support guy on the line had the nerve to tell me to buy one. "You're telling me I have to buy additional hardware to install your operating system?" "Well, no." "That's good. Because this machine doesn't have anywhere to put one. Now how the heck am I supposed to install a RAID driver on this machine?"

So Microsoft decides that the answer is to bring in HP in a 3-way conference call. So they put me on perpetual hold, and finally connect HP. Just as HP is answering the line, my wireless provider (you know, the most reliable wireless network in the country) drops the call. I chalk it up, and wait for Microsoft to call me back. They took my number. They'll do that. Right? Foolish me. So I call them back. I get someone else. Back to square one. They reach the same conclusion. Back on perpetual hold. Get the HP individual on the line this time. I actually hear voices! Yay, a possible resolution is coming. Verizon drops the call again.

Now I'm getting ticked. It's midnight, and that's twice. I wait again. No return call from Microsoft. I call them back. I give the case number again. This time I explain in detail that the customer support individual could have done a lot of good if he had called me back. Now I'm upset, and have to spend another half-hour going through the whole thing again, even though it's supposedly all logged in the case file. And you never get the same tech support guy. Never. Ever. But I make it clear that I know it's not his fault, because he's not the guy who didn't call me back. So he goes through the same process, dials up HP, and starts the conference. Again, call dropped. Three times. And does he call me back? Hell no.

Note to Microsoft: If you want to provide better customer service, listen to your customers. Wireless companies drop calls. Call the customer back if they suddenly disappear from the line. We didn't wait all that time in line, trying to get our problem resolved, just to hang up on you. Especially if we've been really polite and helpful. Call us back. What you guys did to me last night, three times in a row, was shameful, and only irritated me to the point where I wanted to curse you up and down. Your customer service department needs a serious upbraiding for that type of behavior. You guys are supposed to pride yourselves on excellent customer care. How long does it take to call the customer back and find out if the call was dropped? Now you have to deal with bad PR. And I'm not saying this because I'm ticked. I'm just saying it was something you could have done better, and should be doing better. Making us call back into the queue wastes our time, makes us angry and short-tempered, and makes for a really tough experience for your customer support folks. I know that the phone support folks I talk to really don't like to deal with irate customers. Being proactive about things as simple as this can save you time and money, since it will reduce the stress for both the customer and your phone support personnel. Isn't that worth it?

At this point, I'm fed up with dropped calls.
I swap out one of the DVD drives with a 300GB Maxtor drive I've got sitting around, and boot off of the Windows CD. Works like a charm. Setup installs Windows onto the machine and it's screaming. Only problem: I have a couple of devices that Windows can't identify:

- Ethernet Controller
- PCI Controller
- RAID Controller
- Unknown Device
So I log onto the HP Support site and explain my problem in a chat session. The tech support person there is very helpful and provides me the links to the drivers I need. Apparently, these devices are integrated into the motherboard, and upgrading the motherboard drivers will automatically install the appropriate drivers. So I download the drivers on my laptop, burn a CD, upload them onto the PC, and then install them. No dice. The devices are still not identified. I've figured out that the unknown device must be my sound card, since there's no sound on the machine, and there's no audio device in the Sounds control panel.

I contact HP support again. I explain the problem to them, letting them know that the motherboard device drivers didn't work. Suddenly I'm getting a different story. Now, according to HP, you can't install XP Pro on a Pavilion m7480n. Or, rather, you're not allowed to. What you're getting, when you buy this machine, is an OEM version of Windows XP Media Center Edition--it's been modified for their machine. They won't support any other version of Windows on it, and that's why you can't find the CDs for it or the individual device drivers. They want to ship me a new set of recovery CDs. I reiterate that re-running the recovery CDs will not solve my original problem. Windows Update will continue to make the system freeze. They say, repeatedly, that they are certain that it will fix the problem. I am laughing inside, and bashing my head against the desk. How thick-headed can you be? I have described the problem to you repeatedly. I have told you the steps that were taken. Reinstalling from the recovery CDs does not resolve the issue. The system is now quite stable under XP Pro, except for the simple fact that I can't hit the Internet. And now you want me to return the machine to its previous unstable state? How can you ask me to do that?

So, here I am. I have a $300 copy of Windows XP Pro, and I can't connect to the Internet. Other than that, the machine is screaming. I can't get Windows to identify the Ethernet controller or the RAID controller, so this big-ass hard disk in the machine is just useless. And I've got no sound. So what have I learned?

- Never buy a prebuilt machine again. I'll build my own in the future.
- Never buy from CompUSA again.
- Never, ever trust your computer to the technical support staff at a computer store. They will make potentially catastrophic decisions about your computer without consulting you, and once those decisions are made, you have little or no recourse. I lost the notes for a novel I've been working on for twenty years. Thank God for backups. But I did lose the last six months of revisions. (Shame on me.)
- Don't use a cell-phone to conduct technical support calls. Use a land-line, or use a chat service. Chat services are better, because some of them will send you the transcript via email.
- Don't expect technical support staff to call you back, even though they take your number. That's largely just lip-service to give you a warm and fuzzy feeling.
And what am I doing now? The only thing I can reasonably do. I'm actually going to try reinstalling, one more time, from those damned recovery CDs. Eventually, something has to give. Even if it means forking over more money for an extended warranty and shipping the whole kit and caboodle back to HP for service. It just amazes me that I have to go through all this crap in the first place. It's been a serious comedy of errors.

Addendum: Reapplying the OS from the recovery CDs worked. I'm writing this blog update from the freshly repaired machine. But I made it a point not to apply any of the HP updates, since I suspect that one of their automatic updates is what hosed the machine. I'll only install critical updates from Microsoft in the future. So far, it's running nice and fast. But I'm still waiting nervously for something to go wrong. Call me a skeptic.
In Part 1 of this series, we took a look at how errors are handled in Visual Basic using On Error. We saw that it has some architectural problems, it doesn't promote developer discipline, it tends to result in code that's difficult to read and performs less than optimally, and that despite all of this, it works. I can't stress enough that if your code base is working, you should leave it alone. Learning a new technology will likely get you all fired up to use it. Don't whet your appetite on a functioning and stable code base. If you're that excited, create a test project and learn it there. In the name of all that's holy, don't break perfectly good systems just to test your knowledge.

In Part 2 of this series, we'll learn about structured exception handling (SEH): what it is, how it works, and why it's good for you. There are compelling reasons that object-oriented languages use it instead of passing around error numbers and returning error codes from functions.

What is Structured Exception Handling?

Structured exception handling is, in essence, a fancy term for "error handling." An exception is an error. Specifically, it's any condition that prevents your code from doing its work. For example, you might attempt to open a file and discover that it doesn't exist, or that you don't have permission to do so. These conditions would result in exceptions, since they would prevent your application from working. Some conditions, however, are recoverable. In an attempt to delete a file, you may discover that the file doesn't exist. Here, the file's absence is fine; the user was going to delete it anyway, so you can ignore the exception and move along.

So far, everything sounds familiar, and it should. It's what you've been doing all along. Only the names have been changed to protect the innocent. However, structured exception handling differs from using On Error in that it's object-oriented. It uses objects to encapsulate error information. It establishes a clean, predictable behavior for your application when an exception occurs, and allows you to nest error handlers within the same method (something that was rather difficult to do with On Error). It also provides a mechanism for guaranteeing that certain code executes before your method loses control when an exception occurs, allowing you to clean up your resources, resulting in fewer orphaned database connections, file streams, and system resources.

Exceptions aren't raised, they are thrown, and exception handlers catch them. Hence, your code tries to do something, and if it throws an exception at you, you catch the exception and attempt to handle it. That's the big scary picture of SEH. The keywords used in SEH are Try, Catch, Finally, End Try, and Throw. In a nutshell, you wrap your code in a Try...End Try block. You catch any exceptions in one or more intervening Catch blocks, and you place your cleanup code in a Finally block. For example:

Protected Overridable Function ReadFile(ByVal sourceFile As String) As String

    ' Requires Imports System.IO and Imports System.Diagnostics.
    Dim reader As StreamReader
    Dim buffer As String

    Try
        reader = New StreamReader(sourceFile)
        buffer = reader.ReadToEnd()
    Catch ex As FileNotFoundException
        Debug.WriteLine("File not found: " & sourceFile)
        Throw
    Catch ex As UnauthorizedAccessException ' Thrown when access is denied.
        Debug.WriteLine("Permission denied: " & sourceFile)
        Throw
    Catch ex As Exception
        Debug.WriteLine(ex.ToString())
        Throw
    Finally
        ' Disposer is a custom helper; presumably it handles the case
        ' where reader was never assigned.
        Disposer.DisposeOf(reader)
    End Try

    Return buffer

End Function
When an exception occurs within the Try block, the runtime scans the list of Catch blocks from top to bottom, searching for the Catch block that accepts an exception type most closely matching the type of the currently thrown exception. If it finds one, the runtime transfers control of the application to that block. If one can't be found, control is passed to the Finally block, if one exists. Your Finally block gets the opportunity to clean up resources prior to the routine losing control.

If the runtime found a Catch block that handled the exception, the Catch block can do any number of things with it. It may quietly consume it, and then continue processing. Or it may rethrow the exception. In that event, control of the application is passed to the Finally block, as if no handling Catch block were found. The Catch block may throw a new exception. In that event, the Finally block is called, and then control is passed back up the call stack until a suitable exception handler is found. In the event that no suitable exception handlers are found, a message is either displayed or recorded in the Windows event log (depending on the nature of the application), and the application terminates.

Try Blocks

Place any code that could throw an exception inside the Try block (that's the portion between the Try keyword and the first Catch statement). The .NET Framework documents the exceptions that it will throw, so you should have a very good idea of what exceptions you should be ready to catch, and what statements need to be placed inside the Try block.

Catch Blocks

Exceptions are handled in the Catch block. A Catch block always takes exactly one parameter, and it must be an exception. You may define multiple Catch blocks; when doing so, always put the more specific exception types at the top, and the least specific type (System.Exception) at the bottom. (The reason for doing this may not be readily apparent: if you put System.Exception anywhere else, anything below it will be ignored, because all exceptions are derived from System.Exception. The runtime evaluates exception types from top to bottom; once it finds a match, it will stop looking any further. So remember to put the generic exception at the bottom, as a catch-all. Better yet, if you don't know what to do with it, omit it altogether, and let the caller handle it.)

In general, you do not want to place any statements inside the Catch block if those statements can throw exceptions themselves. If they can, wrap those statements in exception handlers and handle them accordingly. You are not required to provide any Catch blocks at all. In that event, you must provide a Finally block. This situation is desirable when you want the code in the Finally block to execute even if an exception occurs, but you don't want to handle any of the exceptions that you'll encounter in your method.

Finally Blocks

The last block you may include in a Try...End Try block is the Finally block. You may only include one Finally block. Its contents are executed after your Catch block code has executed, and before control is transferred out of the exception handler. It's your last chance to do anything before your method loses control. Typically, the code in the Finally block rolls back transactions, closes open files, and cleans up disposable resources. As with a Catch block, it shouldn't invoke methods that can throw exceptions unless it wraps those statements in exception handlers and handles the exceptions.
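To make that Catch-less case concrete, here's a minimal sketch (the WriteLog method and the log file name are hypothetical, purely for illustration):

' Requires Imports System.IO
Public Sub WriteLog(ByVal message As String)
    Dim writer As StreamWriter
    Try
        writer = New StreamWriter("app.log", True) ' True = append mode
        writer.WriteLine(message)
    Finally
        ' Runs whether or not an exception was thrown; any exception
        ' still propagates to the caller after this block executes.
        If Not writer Is Nothing Then
            writer.Close()
        End If
    End Try
End Sub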
Throw

To throw an exception, you use the Throw keyword, like this:

Throw New ArgumentNullException("value")
Throw always takes an exception object as its parameter. That's it. As we'll see in a later article in this series, you can create your own custom exceptions, and you can attach additional properties to them (because they're objects) to convey as much information as you need in order to describe the problem.

Structured Exception Handling Example

In the code sample below, we are doing meaningful work and committing a transaction in the Try block, rolling back the transaction in the Catch block, and disposing of resources in the Finally block.

Private Sub InsertEmployee( _
    ByVal name As String, _
    ByVal employeeID As String, _
    ByVal connection As SqlConnection)

    If name Is Nothing Then
        Throw New ArgumentNullException("name")
    ElseIf employeeID Is Nothing Then
        Throw New ArgumentNullException("employeeID")
    ElseIf name = String.Empty Then
        Throw New ArgumentException("name cannot be empty")
    ElseIf employeeID = String.Empty Then
        Throw New ArgumentException("employeeID cannot be empty")
    ElseIf connection Is Nothing Then
        Throw New ArgumentNullException("connection")
    ElseIf (connection.State And ConnectionState.Open) <> ConnectionState.Open Then
        Throw New ArgumentException("Connection is closed.")
    End If

    Const SqlTemplate As String = _
        "INSERT INTO Employee (Name, EmployeeID) VALUES ('{0}', '{1}')"

    Dim sql As String = String.Format(SqlTemplate, name, employeeID)
    Dim command As SqlCommand
    Dim transaction As SqlTransaction

    Try
        transaction = connection.BeginTransaction()
        command = connection.CreateCommand()
        command.CommandText = sql
        command.CommandType = CommandType.Text
        command.Transaction = transaction
        command.ExecuteNonQuery()
        transaction.Commit()
    Catch ex As SqlException
        If Not transaction Is Nothing Then
            transaction.Rollback()
        End If
        Throw
    Finally
        ' Guard against Nothing: an exception could have been thrown
        ' before command or transaction was assigned.
        If Not command Is Nothing Then
            command.Dispose()
        End If
        If Not transaction Is Nothing Then
            transaction.Dispose()
        End If
    End Try

End Sub
Moving On

In this article, we've seen the basics of how to handle thrown exceptions, and how to throw them ourselves. Moving forward, we'll cover nesting exception handlers, creating our own exceptions, and we'll dive into when it's appropriate to throw them, and when it's not.
Many VB 6 developers who have made the switch to .NET continue to use VB 6's On Error error handling model. Some continue to do so because structured exception handling using Try...Catch and Throw represents a fairly daunting learning curve. Others do so because the On Error model is still supported for backwards compatibility, and the old adage still applies: "If it ain't broke, don't fix it." But there are very good and compelling reasons to learn structured exception handling. Modern programming languages have been using it for years, and Visual Basic has only recently caught on. It's a good thing, believe me. In this post, and the one that will follow, I'm going to try to explain why you want to embrace it, and how it can improve your code. Hopefully, I'll do so in a way that's clear and concise, and doesn't confuse you.

If It Ain't Broke, Don't Fix It. No, Really

First and foremost, you shouldn't do a massive overhaul of your existing code just to replace the error handlers. And if you think someone's not considering that, think again. If you are, knock it off. Refactoring your entire code base just to replace the exception handlers isn't a good idea. It's tantamount to tearing apart your car just to replace all the screws. If your software is working with the existing error handling engine, leave it alone, especially if you're not familiar with structured exception handling. You do not want to break perfectly good code because you wanted to implement a language feature that you don't fully understand. Get a grip on structured exception handling first, then use it in your next project.

As an aside, structured exception handling and On Error can co-exist in the same project. They are not mutually exclusive. Having said that, I don't normally recommend it. Mixing exception handling models tends to create code that confuses maintenance programmers; it creates a mental context switch when they're reviewing your code. Some routines use one model, some use another. This slows down maintenance of the system, and that's rarely a good thing. As you can imagine, I will not be providing advice on how to mix these models in a project.

A Review of On Error

Visual Basic's On Error error handling mechanism, which is still supported in .NET for backwards compatibility, works in a relatively straightforward manner:

Public Sub Main()

    On Error GoTo ErrorHandler

    Dim x As Integer

    x = 5 / 0 ' Force a divide-by-zero error

HelloWorld:
    MsgBox "Hello, World!"
    Exit Sub

ErrorHandler:
    If Err.Number = 11 Then
        ' Yep, we did this on purpose. Ignore it and continue.
        Resume HelloWorld
    Else
        MsgBox Err.Description
    End If

End Sub
In some cases, you didn't really want to jump to an error handler. Instead, you wanted to ignore the error, because you could safely ignore it. In those cases, you used On Error Resume Next, and handled the error on the line immediately below the offending line. Visual Basic provides the Err object, which exposes the Erl, Number, Source, and Description properties. These properties are intended to provide enough information for you to find out what happened, where it happened, and what component was responsible for it.

While the On Error model was straightforward, it was looked upon with fairly universal disdain for one major reason: it involved the use of the dreaded GoTo keyword. While I won't dredge up a pointless debate about its merits, I will point out that it had one particular failing: it tended to create very complex procedures when a function needed to handle many different kinds of errors. Simply put, there was a lot of jumping around.

In addition, determining the nature of the error could sometimes be a risky business. In an ideal scenario, you could trust the error number. According to the documentation, the first 1,050 error numbers were reserved for use by Microsoft. An application or library vendor was expected to start their unique error numbers at 1051 and supply a descriptive error message along with the number. This rule wasn't always respected, however. Vendors were also expected to provide the source of the error. But this meant that you had to query at least two properties in order to determine the true nature of an error, because two different vendors might expose the same error number. But it gets worse. Some vendors chose to use the same number for multiple errors, and distinguish between specific errors in the message. Consequently, in order to be truly certain, you had to check all three properties:

If Err.Number = x And Err.Source = y And Err.Description = z Then
    ' Handle the error
End If
For many developers, this seemed like overkill, so they chose to parse the contents of the description, since the likelihood of two vendors providing exactly the same message was pretty remote. This trade-off is never a good idea, however. It compromises certainty, and it brings inefficient string operations to the table. I've worked with an immeasurable amount of VB and ASP code in which this decision was made. While the code worked, it was hard to read and therefore difficult to maintain; and the lack of information about the source made it difficult to determine who was raising the error that you were trying to handle.

Someone once said that the difference between a professional developer and a hack is the way in which they deal with errors: a professional developer takes them very seriously, and a hack plays footloose and fancy-free with them. I'm not sure who said that, but it struck a serious chord with me, and it changed the way I write software. When it comes to handling errors in your code, you want to be absolutely certain that you're handling the right error, at the right time, in the right way, and if you don't know what to do with an error, you leave it alone so that the caller can handle it.

Because of the way that the On Error architecture works, it is sometimes difficult to do this. You have to create lots of "jump-points" so that your error handlers return control to the correct location. If you're querying the Err object like most developers, you're doing a lot of string parsing, which can be quite tedious and creates code that is hard to read and performs poorly. After a while, you may start letting things go, and eventually only pay attention to the big, obvious errors. You may even get to the point where you dread diving into the error handling code because it's harder to read than the application logic. Think you've got it bad? What about the next poor developer who's got to maintain that code after you've moved on to bigger and better things?

Moving On

So here's what we've got:
- On Error is a simple, straightforward error handling system that works (usually).
- Don't modify an existing code base that works unless you have an utterly compelling reason to do so.
- On Error does pose maintenance issues, in that it is difficult to read, and tends to discourage developer discipline.
- Error number collisions are quite frequent, meaning that developers have to rely on the Err.Source and Err.Description properties to determine the nature of the error, negatively impacting application performance and maintainability.
- On Error is based on a trust system: it trusts the vendor or developer to correctly populate the Err object's properties, which often doesn't happen as it should.
In the next article in this series, we'll look at structured exception handling (SEH) as the alternative to On Error. We'll look at how it addresses these issues, and provides a superior means of dealing with errors in your applications.