Programming / Coding Blog

This blog is meant to provide information, thoughts, and links that may be useful to a computer programmer. There is no set strategy or limit on what topics will be discussed.

May 23, 2008

Review: C# & VB.Net Conversion

I'm not sure about others, but I quite often find I need to convert from one language to the other. Now, I feel I'm fairly well versed in both C# and VB.Net; however, VB.Net is more natural for me because I originally started programming with VB 3.0.

I've taken the time to learn and apply some of my knowledge towards C#, and I'll admit there are some very nifty parts to C# that you won't find in VB.Net. But the same goes for VB.Net, which has some features that C# lacks. I'm not writing this to debate which language is better; in my opinion both have their strong and weak points, and depending on the situation you may find one language handles the requirements better than the other. It all seems to come down to personal taste; I've yet to meet a challenge that one language could handle and the other could not. I have had challenges where something was easier to do in one language than the other, but the other language could still perform the same feat with a little more ingenuity.

This all leads up to the fact that both languages have strong examples for certain features or code samples, and there are times when it is difficult to convert code from one language to the other. The other day I picked up a book from Amazon.com called "C# & VB.Net Conversion". To my surprise this book covers .Net 1.1 and earlier; yet it still very much applies to converting code used in .Net 3.5 and earlier!

This book is well laid out; once you glance over the conventions used and the table of contents, you can quickly find information for one language and then immediately see the equivalent feature (or the method to accomplish that feature) in the other language.

I'd give this book a rating of 3.5 out of 5, only because I couldn't find a more recent version of the book. There have been some changes in both languages that make less than 10% of this book obsolete; so, this isn't the 'solve all conversion problems' book. But, my research indicates that as of this writing there isn't really a book that will do any more than the one I picked up.

I wanted to pass along this finding to anyone who reads this blog and needs to learn to convert between the two languages; the book is well worth the time to read to make this process easier. Especially since you can get used copies in great condition from Amazon.com for around $6 or less (including shipping).

Until next time, Happy Coding!

May 22, 2008

Virtualization (in a nutshell)

The other day a friend of mine and I were discussing how virtualization works; to help her out I broke it down to a simple analogy. In fact, I think it will help out most people who are not at all familiar with virtualization. Here is a snippet of the e-mail; some content has been modified to remove personal discussions and to help the conversation make more sense to someone reading it from a third-party perspective.

...You’d use virtualization if you wanted to run X number of instances of SQL Server and each instance (or many of the instances) needed to be on its own operating system. Reasons for doing this vary: some believe that a dedicated OS will handle a single instance of SQL better than running two instances, while others might choose this route because they have 15 server boxes running SQL and, to lower the amount of reconfiguring, it’s easier to put each box into its own virtual hard drive. Basically, you could use virtualization software to accomplish sharing (and dedicating) memory between SQL Server and a separate OS. Some people will purchase a single box for their SQL Server 'virtual worlds’ (my terminology there) with the maximum RAM and CPU installed, and then set up each virtual hard drive to use enough RAM for that particular instance.

It's also worth noting that a 'virtual world' is limited to the maximum amount of RAM that the OS installed in the 'virtual world' will allow. Even if the server box is running Windows Server 2008 (64-bit) with the maximum allowed memory, you would still be limited to 4 GB if you installed Windows XP Home Edition as the OS for the 'virtual world'. You could always tell it to use more RAM, but the OS will only recognize the maximum limit it is allowed (or configured for). This is an important part to understand, because some new administrators will try to cut corners (or costs) by using a lesser OS, thinking that the server OS will pick up the slack; this is NOT the case, as the underlying server OS is simply a host that allows the 'virtual world' to run... it has no say over the actual running of the 'virtual world', other than the resources that are made available to it.

An example would be if you had a 64-bit system with 4 dual-core processors running Windows Server 2003 with the maximum RAM (I think the max is 32 GB, but I could be wrong). Now, let’s say your business wants a SQL Server instance that holds valuable company information (such as Human Resources records, accounting records, employee personal information, etc.). In this case you’d probably want to make sure this instance of SQL Server is not on the same machine that your users (whether they be employees or customers of the company) are using. So, instead of purchasing a separate box for it, you could use virtualization to accomplish this feat (though, as with anything else, there are security concerns and procedures that should be followed when doing this). You would simply create a new virtual hard drive (‘virtual world’) that contained this SQL Server instance. When creating this ‘virtual world’ you can specify how much RAM it may use, and this would be determined by the amount of data flowing through the SQL instance and the number of users. What’s really ingenious about virtualization is that you can go into an existing virtual hard drive (I believe even while it is running) and tell it to use less RAM or other resources, without ever shutting down the server. So, let’s say on the same box your primary business SQL instance is running and using 28 GB of the 32 GB, and the other 4 GB is reserved for the server OS. You just go into the first virtual hard drive and tell it to use 24 GB; this frees up 4 GB, which you would then assign to the virtual hard drive being created for the company-sensitive info (the new SQL instance). Then just monitor both ‘virtual worlds’ for a while (maybe a couple of days or weeks) and adjust the RAM, as well as any other resources, as you see fit.

As you can see, virtualization can be handy. But there are also drawbacks, which Windows Server 2008 addresses. I think Windows Server 2008 did a very good job at addressing the issues, but this is my opinion. The basic drawback is memory allocation/sharing; I’ll cover that in a second. If you are interested in this type of scenario (using virtual hard drives) and see a business requirement that you can fill with it to improve your systems, then you’ll want to read up on virtualization and look at applications like VMware or Microsoft’s Virtual Server. I’ve heard VMware is very good, especially in cases where you are consolidating hundreds of servers into a few. VMware, I believe, was the first true virtual environment... I could be wrong on this; I don’t know the whole history of virtualization. My personal experience has been with Microsoft’s Virtual Server; it’s fairly simple to set up and use. I chose to go with MS over VMware because I personally feel that MS would design their Virtual Server to utilize every nook and cranny of the Windows OS; I’d honestly look into using VMware if I were running an OS that isn’t MS based...

...Hyper-V is a term you will want to familiarize yourself with if you are getting into virtualization. Hyper-V is Microsoft’s technology for addressing memory allocation within the OS to improve the performance of virtual hard drives and the operating systems running on them. It was designed specifically for virtualization, and it cleans up a lot of the memory issues that occurred with Windows Server 2003 and earlier versions. People would see problems when running, say, 15 virtual hard drives at once; mainly that their expensive RAM wasn’t worth any more than common RAM picked up at a local computer shop! Microsoft’s solution was Hyper-V. If you ever get bored and want to geek out, read up on Hyper-V; the concepts are actually quite interesting, and even if you’re not familiar with virtualization you will still see how Hyper-V improves it. Amazing stuff!

The basic idea, in a nutshell, is that prior to Windows Server 2008 the virtual hard drives would compete with each other and with the OS for memory. Think of memory like a card catalog in a library. If you want to find a book, you need to open a drawer, find the location of the book, then go to the book. Now, if you want to find another book, you’d have to close the open catalog drawer, then open another drawer and find the location of that book. Now, let’s say there are only 5 catalog drawers and they hold the locations of all the books in the world; and let’s even say that everyone knows these drawers hold this information. So, you might be looking for a book on virtualization, and the person next in line wants a book about vacuums. They obviously can’t look in the catalog at the same time as you, because there isn’t enough room to move the catalog cards around so both people can view them. So, the next person MUST WAIT for you to finish; when you are finished and leave to get your book, they can look for theirs. Now, what if you realize that you also wanted to look up something about SQL? Now you must WAIT for that person to finish looking in the catalog, PLUS anyone else who got in line before you returned.

This is also how memory would work; it is more of a linear type of thing. The OS would be accessing part of the RAM, then Virtual Hard Drive (VHD) #1 would access a part after the OS is done, then VHD #2 would access another part. So when switching between VHDs you’d run into WAITS for the OS or a VHD to finish accessing the memory. The first attempts to resolve this problem were to divide memory into sections; but regardless, the OS alone uses up so much memory that the VHDs would have to compete just to get enough memory to run, let alone to actually perform any operations (such as simply logging in to the VHD).

To resolve this, MS introduced Hyper-V, which basically holds references or copies of memory in a temporary memory table. So when VHD #1 is done, the accessed memory isn’t actually released; it’s stored in a temporary table just in case VHD #1 needs to come back and access it again. The same goes for the OS and VHD #2. This doesn’t necessarily resolve every potential memory-sharing issue… but it sure does greatly reduce them! Going back to the catalog analogy: if the section of catalog cards you were looking at was copied and placed into a separate temporary box, and you realized you wanted a specific topic under virtualization, you could go to the temporary box of catalog cards that held the info about virtualization books. This obviously would save you some time because you don’t have to wait in that long line to access the original cards. Storing some cards in boxes has its own limits, though: it takes additional time and effort to copy the information, you must find (or create) a temporary box, you have to keep others from using the box, and it only works if you are searching for a book that is relatively similar to your original book. If you want to search for a book about car engines, then obviously you still have to get back in line to access the original cards. See how there are some drawbacks?

That’s about the best I can do at explaining this subject without getting into great technical detail. Virtualization is quite an amazing concept and can help businesses that use lots of databases and servers. Hardware technology and operating system technology are quickly adapting to handle virtualization more elegantly. It will probably be a few more releases before we truly start seeing virtualization performing near its peak potential; but now would be a good time to familiarize yourself with the topic while it’s still being developed and refined. As with any other technology, the longer you wait to learn about it, the more complex it becomes, which means the more confusing it will be to learn. Imagine if you had learned about SQL Server when it was simple, just a database with commands (before all of the DTS, SSIS, etc.); now you’d only have to learn about the new options offered in each release. Imagine instead walking into SQL Server today for the first time ever and being told to run a multi-national company’s data structure... that’d be a daunting task, because there is just so much to learn with SQL Server alone!

I hope you get a lot of information out of this!

Until next time, Happy Coding!!

March 6, 2008

Should you use old or new technology?

Overview:

With the release of Visual Studio 2008, Windows Server 2008, and SQL Server 2008 (due in the 3rd quarter of 2008, as of the time of this writing), I get asked whether I will be utilizing the new technology or staying with the old. This question requires some thought before deciding, and for many people the considerations can be very complex. I'm going to cover the considerations that an everyday programmer and administrator commonly come across at a small-to-medium sized business. I am not covering enterprise structures, because most enterprises have some sort of business IT roadmap/plan they will follow and require much more authorization and testing than smaller businesses.

Understanding the complexities involved:

First, to make the best decision, the IT admin/programmer will need to understand the overall objective of the business, the requirements that are expected to be met, and, foremost, the actual practicality of the business needs.

An example is a small business with fewer than 20 employees and probably only 1 IT (do-it-all) person; here the decision can be much simpler. This type of scenario will typically involve very few custom-made applications, usually applications that can be upgraded using built-in upgrade wizards, and, in most cases, there will be limited compatibility problems with operating systems (i.e. most, if not all, systems are on one OS version, like Windows XP Professional). In this case, let's assume the IT person has been authorized to make the upgrade decision on their own and has been given the financial 'green light'. So, what do you do? First, what OSes are being supported? In this case, Windows XP (no Vista…yet) is the prime OS. Next, how about servers? They probably have a couple of servers, maybe one using Windows 2000 and one using Windows 2003, and perhaps a third running Windows 2003 to back up important files on a nightly basis. Next, how many applications are custom made (i.e. written in-house in a language such as C# or Visual Basic)? Maybe half a dozen custom apps exist. Next, what programming language was used? The applications were probably developed using Visual Studio .Net (any release) against Framework 2.0.

Another example would be a medium-sized business with 200+ employees and a small IT department of maybe 10 people. Going through the same line of questions might reveal that some employees are on Windows XP and some are on Windows Vista. You may also find that you have more than a couple dozen custom applications, written in languages ranging from Visual Basic 6.0 to Visual Studio .Net 2005. They might have a small server cluster with redundant clusters.

Deciding what to do:

In the first scenario, with the small business, the choice is quite simple: you can upgrade if you want to, but you don't have to, because of the small number of custom applications and no immediate worries about the OS changing. In my personal opinion I would go ahead and make the conversion to the newly released software versions when possible; this will minimize the headache of upgrading at a later time when you no longer have a choice. There are two areas of caution here. The first is that the custom applications were developed with the intention of running on Windows XP; upgrading them to run on Vista can require more time, but with so few supported applications they can most likely be upgraded using a wizard and then revised by hand over a small amount of time. The other caution is upgrading from Windows Server 2000; this one is a strong candidate for being upgraded as soon as possible for two reasons: 1) the longer you wait, the larger the leap in OS versions, which means more chance of problems; and 2) Windows Server 2000 may become unsupported during this year, which would mean support via websites only and vital security updates only…no more patches for bugs, and especially no more service packs!

In the second scenario, with the medium-sized business, the decision is much more difficult. The first consideration is what to do with the legacy applications (referring to Visual Basic 6.0); converting them is a large, tedious job that requires upgrading the code by hand. The next consideration is the servers: how to plan the migration path and how to test the migration before implementation. Then there is the support aspect for the older Windows 2000 servers; with that OS slated to no longer be actively supported and fixes limited to security only (unless you have an agreement with Microsoft for continued patches), this starts to become a race against the clock, or a daily gamble with the security of your system (not that the more current versions don't have their own security gambles as it is). In this scenario I would go ahead and make the switch to the latest programming language, because the upgrade path is minimal for most applications made with Visual Studio .Net, and immediately find a solution for the Visual Basic 6.0 applications. I would, at minimum, upgrade the Windows Server 2000 system, because the thought of minimal support from Microsoft is scary; it's already hard enough to get patches for products that are fully supported, so I can't imagine the problems with a product that is no longer supported in the mainstream.

What about SQL Server 2008?

This was not addressed in the above scenarios because it is pretty much the same decision process in both cases. The first question is: do you want the latest technology, and can you risk (in the more immediate future) a newly released software version? Most people have already decided to wait until Service Pack 1 is released before even testing the new technology. Larger organizations that already have one or two SQL Servers in production environments will most likely want to avoid upgrading those, due to the complexity of the upgrade testing and path; and I'm a believer in 'if it isn't broken, don't fix it'. I'd personally rather use SQL Server 2008 for newer development, then start the migration process for the older databases as needed or as opportunity presents itself. You may find one day that you have to do a complete restore (I hope you never do), and at that point you might just decide to restore onto SQL Server 2008 instead and deal with compatibility issues then (this assumes you have already tested the upgrade requirements and functionality).

So, what do I do?

Well, it all comes down to two simple questions:

  1. Are you willing to put in the time to test current compatibility and perform upgrades manually where needed?
  2. Do you want to get the upgrade done now while you have a smaller upgrade path, as opposed to waiting another 5 years?

If you answered NO to either of these questions, then stay right where you are and keep your fingers crossed that the next version of these software titles keeps a simple upgrade path. If you answered YES to either, then determine your needs and start testing before actually upgrading (there are changes, some you might not like and others you may love...be prepared to read up on the new features in all 3 products). If you answered YES to both questions, then you will want to start testing immediately and pour in the resources available to you; this will be a bumpy ride in the beginning, but it will get much smoother as the dust settles and you get familiar with the new technology.

Conclusion:

Deciding whether or not to upgrade is not an easy decision by any means. I have only addressed two common scenarios, and no one person's situation will fit perfectly into them. Also, these are my personal opinions and thoughts on the matter; I'm in no way a guru on this subject and would caution you heavily to do all the research you can and carefully plan your steps. Don't be afraid to hire a consultant if you are confused about the best path to take. Microsoft works very closely with many consultants and vendors, preparing them for this upgrade and for handling many, many different scenarios.

In my overall opinion, if you can spend the time and effort, then upgrading to the current/new technology is much better than sitting on old technology that feels comfortable and works for you. Before you know it, you'll find that technology has blown right by you; upgrading is no longer an option, and you will have to either rebuild your structure or hire someone to come in, fix everything, and deal with all of the headaches (I know this because I have been hired by a company to fix their 10+ year old technology problems…and it really is a headache because of the drastic changes in technology architecture and methods)!

February 27, 2008

Blog’n Again…

Well, it's been a little over 6 months since my last entry for this blog. Life's been a challenge and required my absence from this blog (and my other blogs).

Now things have settled down. My health is stable, my career has become stable. Now, it's time to get this blog back to being stable! I hope to be able to continue along the path I was headed with this blog.

You may, or may not, know that Microsoft is in the process of releasing its 2008 versions of SQL Server, Windows Server, and Visual Studio. I will remain focused on the 2005 versions for the most part; however, I will try to touch upon some of the new 2008 features and enhancements. This blog may evolve to covering the 2008 versions as they become more mainstream…and as I evolve to using those versions primarily. As with any newly released products, I will go through rigorous testing and validating before I convert over.

My current objective is to write an entry for this blog about once a week; however, it could be more or less. I hope to not go any longer than two weeks between entries from here forth.

Until next time…Happy Coding!

April 10, 2007

A brief look at Garbage Collection

Garbage Collection

What is it?

Garbage Collection (GC) is a built-in feature of the .NET Common Language Runtime (CLR). The purpose of garbage collection is to free managed memory resources. The reason this came into existence is that, in prior Windows development, the programmer was responsible for releasing allocated memory resources when appropriate. Resources are most typically consumed by the creation of objects (i.e. using the "new" operator). The problem with this was that many developers (especially hobbyists) would not properly release the resources their applications (and objects) had claimed; another issue was that if an application failed and prematurely terminated, the resources could remain in use until the memory was specifically cleared. Developers of complicated applications that used a vast amount of resources had to create methods to determine whether a failure had occurred and have the memory cleared prior to restarting the memory-intensive application.

Microsoft's answer was to create the garbage collector (exposed through the GC class). It periodically scans the currently allocated memory and determines whether any resources are no longer referenced by a running application (whether they were abandoned intentionally or accidentally). Should the garbage collector find an allocated resource with no valid reference, it clears that resource to free up what could be valuable memory.
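
As a minimal illustration (the buffer and its size here are purely for demonstration), an object created with the "new" operator lives on the managed heap until nothing references it any more, at which point a later collection pass may reclaim it:

// Allocate a managed object; the CLR tracks this reference.
byte[] buffer = new byte[1024 * 1024];

// ... use the buffer ...

// Drop the last reference. Nothing is freed immediately, but the object is
// now eligible for collection the next time the garbage collector runs.
buffer = null;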

How do you use it?

Microsoft recommends that you do not use the garbage collector class directly; however, they have left the class open for the few and rare instances where it may need to be called. The reason Microsoft recommends against its use is that garbage collection is an automated task performed through the .NET Framework (1.1 and higher). The primary reason not to call the GC class yourself is that each time a collection runs it suspends the active threads; this can cause serious performance degradation if collections are triggered too often.

Since this is an automated feature, you may ask why you would even bother to learn about it. The primary reason is that, since you don't control when collections are initiated, you could inadvertently stall your application (or absorb a large performance hit). A prime example would be an application that opens multiple database connections through a SQL connection pool limited to, say, 10 connections. It is conceivable that if you were to use all 10 connections and abandon them without properly closing or disposing of them, and then request another connection, you would have to wait until garbage collection (and finalization) reclaimed the abandoned connections before the pool could hand one back.
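
Rather than leaving that cleanup to the garbage collector, connections can be released deterministically. Here is a rough sketch (the connection string and the Orders table are placeholders, not something from the text above):

using System.Data.SqlClient;

public static int CountOrders(string connectionString)
{
    // "using" guarantees Dispose() is called, which returns the connection
    // to the pool immediately instead of waiting for a garbage collection.
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
    {
        connection.Open();
        return (int)command.ExecuteScalar();
    }
}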

So, when would you want to call the GC class, and how would you call it? One instance would be the above example of utilizing a large pool of SQL connections and releasing them all at one moment. Another would be an application that uses a document containing a large number of references to unmanaged memory resources; when your application closes that document, you know all of those resources should be released, since the document no longer exists (suspending the threads that referenced the document would not affect your application; however, still use caution in case other threads are active). You would then call the GC.Collect method. Again, you will want to ensure this is placed in a block of code that is called rarely, preferably only once. One approach would be to place the call to GC.Collect in the form's Finalize method; you would "override" the Finalize method to include the GC.Collect call.
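
As a rough sketch (the _document field and its LargeDocument type are hypothetical, and this shows the call in an ordinary close method rather than a Finalize override), the idea is to collect once at a point where a large, known block of resources has just been released:

private LargeDocument _document;   // hypothetical object holding many resources

public void CloseDocument()
{
    _document.Dispose();           // release the document's resources
    _document = null;

    // One explicit collection after a large, known release point;
    // avoid sprinkling GC.Collect calls throughout the code.
    GC.Collect();
    GC.WaitForPendingFinalizers();
}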

What can I do to release memory resources without calling GC.Collect?

This is best described on the MSDN page titled "Garbage Collection". I will use a single method to describe how to release a memory resource; however, keep in mind that there are a few different ways, and many of them depend on the chosen programming language (for example, C# has some features for releasing memory that are not available in Visual Basic). You can read more detail at: http://msdn2.microsoft.com/en-us/library/fs2xkftw(VS.80).aspx .

You can simply call the Dispose() method directly from your application. An example in Visual Basic would be:

Public Sub Close()
    ' Calls the Dispose method without parameters.
    Dispose()
End Sub

An example in C# would be:

public void Close()
{
    // Calls the Dispose method without parameters.
    Dispose();
}

These two methods show a very simplistic way of disposing of your object; however, you will want to review the above-mentioned website for the preferred patterns and practices that ensure proper disposal by implementing the IDisposable interface.
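
As a loose sketch of that pattern (the ReportWriter class, its StreamWriter field, and the file name are made up for illustration), a class that owns a resource might implement IDisposable roughly like this:

using System;

public class ReportWriter : IDisposable
{
    private System.IO.StreamWriter _writer = new System.IO.StreamWriter("report.txt");
    private bool _disposed;

    public void Close()
    {
        // Calls the Dispose method without parameters.
        Dispose();
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);   // cleanup is done; no finalizer run is needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            _writer.Dispose();       // release the managed resource
        }
        _disposed = true;
    }

    ~ReportWriter()
    {
        Dispose(false);
    }
}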

This should get you started and should help you understand how to make your programs less susceptible to performance degradation from memory resources not being properly released.

April 4, 2007

What is .NET?

This blog entry is merely an attempt to help you understand the .NET Framework. It will not cover in detail how to use the framework or what is contained within it; instead, it will provide links to locations that offer that detailed information.

To understand the history of .Net, we need to understand how Windows operates. To utilize the classes within Windows (pre-.Net), the programmer would need to access Application Programming Interfaces (APIs). These allowed the programmer to access specific classes designed to interact with the Microsoft Windows Operating System.

An example would be a developer needing to access a CD in the CD-ROM drive. The developer would first reference an API wrapper in the application and then write code to access the drive. The beauty of this method, back then, was that the developer didn't have to know how to write code that interacted directly with the mechanics and electronics of the drive. All the developer needed to know was how to initiate the API for accessing the drive, what the API would return as a result (success/failure of the access, information about the drive, information contained on the CD, etc.), and how to handle those results.
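
The .NET Framework pushes that abstraction even further. As a small illustration (this uses the System.IO.DriveInfo class from .NET 2.0 rather than the raw API wrappers described above), checking the machine's CD-ROM drive takes only a few lines:

using System;
using System.IO;

class DriveExample
{
    static void Main()
    {
        // The framework class wraps the underlying Windows calls for us.
        foreach (DriveInfo drive in DriveInfo.GetDrives())
        {
            if (drive.DriveType == DriveType.CDRom && drive.IsReady)
            {
                Console.WriteLine("CD-ROM {0} is ready, {1} bytes free.",
                    drive.Name, drive.TotalFreeSpace);
            }
        }
    }
}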

Now, all of this sounds simple when you look at the overview of what needs to happen; unfortunately, in practice it was not as simple as it is today. The largest problems the developer faced were learning all of the arguments needed, the exact locations of the desired classes to instantiate, and many other details. Microsoft attempted to increase the ease of development by introducing COM, GDI, and many other technologies. This was the beginning of Microsoft implementing OOP concepts (for more information on OOP, see our earlier blog entry, "What is Object-Oriented Programming (OOP) and why do you need it?").

To lessen the burden on the developer, increase the ease of software development, and increase the power of the applications developed, the .NET Framework was created. The .NET Framework initially started as version 1.0.

The .NET Framework is essentially a large collection of classes that are, or can be, used in conjunction with the Microsoft Windows Operating System. Some of the namespaces contained in the framework are: Microsoft.VisualBasic.FileIO, Microsoft.WindowsMobile.DirectX.Direct3D, System.Collections, System.Collections.Generic, System.Data, System.Data.SqlClient, System.Data.Design, and System.Security. The complete list is exhaustive, and the previously mentioned namespaces are a small sample of what you will find. You will want to visit the .NET Framework Class Library Reference website to see a complete listing.

The .NET Framework is a collection of components. It models the method(s) by which an application can operate with the Operating System. The core of the framework is the Common Language Runtime (CLR). The CLR's principal concept is that it uses 'managed' methods to interact with code and data; managed code and data provide a safer method of application programming. Some other components within the .NET Framework are the Base Class Library (BCL), metadata and the Microsoft Intermediate Language (MSIL), and the Common Type System. These are just some of the components, and this is merely an introduction to the .NET Framework. You can visit the .NET Framework Developer Center for further information on how the framework is put together, why it is there, and especially how to leverage its current technologies.

By visiting the .NET Framework Developer Center you will be able to access a huge amount of information; in particular, whenever you run across a section that covers the "best practices" for a subject, you should immediately bookmark it. These "best practices" sections will help eliminate bugs, increase productivity, increase reliability, and, most importantly, increase the security of your application!

A thought that should constantly be kept in the back of your mind is that the .NET Framework is very robust; because of that robustness it is always being improved upon, and new versions are being developed. .NET 2.0 is currently the framework in active use (at the time of this writing); however, .NET 3.0 has been officially released and will shortly become the mainstream framework as applications are produced that utilize the power and security of the newer technologies found in the Windows Vista Operating System.

The one thing this blog entry is attempting to do is help you understand that the .NET Framework is integral to programming in any of the .NET programming languages (i.e. Visual Basic .NET, Visual C# .NET, etc.). You don't need to have a thorough knowledge of the subject, but you will eventually need some basic understanding of the concepts and how to use the framework.

March 11, 2007

Can we get a book that covers something that can't be found in 100 other similar books?

I just got done reading a great book! Believe it or not, this book was called “An Introduction to Object-Oriented Programming with Visual Basic .Net” by Dan Clark (Apress, 2002). What would make this such a great book? You may be even more curious because it’s an introductory-level book.

I think it is great because of an impressive section regarding the pre-design of software. In particular, this book covers proper methods for determining what a client’s needs are, gathering all the information, and putting it into an organized form so that a developer (or a team of developers) can follow a blueprint of the application the client desires. I have read dozens upon dozens of programming books, and I can tell you that more than 75% of the books aimed at beginner and intermediate programmers do not cover how to plan a software development project, and just about all of the remaining ones have a very slim section that introduces vague ideas and methods. This is the first book I have ever read that actually went into detail and provided case studies on a method to collect and analyze data to determine the proper course of action for developing the software. The book also introduces the Unified Modeling Language (UML) in decent detail; UML is a very large subject, and Mr. Clark has done a wonderful job of introducing the reader to it.

A triple kudos to you Mr. Dan Clark! Please continue producing publications that cover topics you can’t find in other competing books!

Another thing that has really bugged me for the past few years is partly due to Microsoft and partly due to the authors of books. I dislike that Microsoft changed the methods for printing programmatically; I feel they should have left a backwards-compatible printing method. Don’t get me wrong here; I understand, and am pleased, that Microsoft provided a much more powerful printing model. However, converting old applications, and even just adjusting and relearning how to print, was a whole headache of a task. Which brings me to the authors of these helpful publications: how come no one, and I mean absolutely no one, has produced a publication that goes into detail on how to print programmatically and covers the ins and outs of printing? I have managed to find a few books that barely mention that printing has even changed; of these, none provide more than a few pages discussing the change, and absolutely none (0) have a chapter on how to print! How is this possible?!?!?! If you read the commonly used message boards (such as MSDN, DevX, etc.) you will find hundreds of questions on printing, and most people are frustrated with the inability to easily print a multi-line text document. In the old printing model there were simple ways to determine the edge of a print line, whether the word currently being printed was going to get cut off, and, if so, how to have the word print on the next line. Now you have to account for so many factors, such as the page length, the text size, the font, whether any of the text is bold, what the margins are, and a few other things besides! All of this just to decide whether the current word being printed will get cut off in the middle or not.
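
To give a feel for the bookkeeping involved, here is a rough, hedged sketch of manual word wrapping inside a PrintDocument's PrintPage handler (the _documentText field, the font choice, and the handler name are made up for illustration):

using System.Drawing;
using System.Drawing.Printing;

string _documentText;   // hypothetical field holding the text to print

void printDocument_PrintPage(object sender, PrintPageEventArgs e)
{
    Font font = new Font("Arial", 10);
    float x = e.MarginBounds.Left;
    float y = e.MarginBounds.Top;
    float lineHeight = font.GetHeight(e.Graphics);

    foreach (string word in _documentText.Split(' '))
    {
        // Measure the word to see whether it would run past the right margin...
        SizeF size = e.Graphics.MeasureString(word + " ", font);
        if (x + size.Width > e.MarginBounds.Right)
        {
            // ...and if so, wrap it onto the next line instead of cutting it off.
            x = e.MarginBounds.Left;
            y += lineHeight;
        }
        e.Graphics.DrawString(word, font, Brushes.Black, x, y);
        x += size.Width;
    }
}

Even this sketch ignores page overflow (e.HasMorePages), bold runs, and mixed fonts, which is exactly the point of the complaint above.
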

My point on the lack (and I think "lack" is a nice way to put the issue) of books covering printing is that if an author actually takes the time to create a book on this topic they could make a huge profit! There is no competition and programmers need this type of a book. You don’t need to take my word on the need for this publication, just take a quick glimpse on MSDN’s community forums and do a search on the keywords “VB” and “Printing” and view only 25 posts. You will quickly come to a conclusion that there is a void in the printing subject and filling it would be beneficial to the programming community and, especially, to the author and publishing company!