
Virtualization, The Implementation

Getting your virtualization effort off the ground takes more than flipping through manual pages and following the steps. Here's how to get from P to V in no time.

In the first two parts of our series on virtualization, we talked about the things you need to consider before you ever start your rollout. In Part 1, we discussed some things that could prevent a physical server from being a good virtualization candidate, like attached peripheral devices or a bad performance profile. We then dovetailed that information into Part 2, where we talked about some of the virtualization products on the market that you may or may not have considered.

Let's now discuss your virtualization implementation. Why is the implementation so important? Don't you just print out the administrator's guide and flip to the chapter on installation? Yes, but the engineering of a good virtualization implementation involves so much more than just clicking "next." There are three critical components of any virtualization rollout you need to get right if you're to get the biggest bang for your virtual dollar: your P2V tool, your backups and your disaster recovery plans.

I Heart P2V
P2V, which stands for "Physical to Virtual," is an acronym that gets thrown around a lot in the virtualization space. P2V is the process of moving your server instances off of their existing physical platforms and into the virtualization environment. There are numerous tools on the market that can perform this task, and we'll talk about them in a minute. But first, what exactly is the P2V process?

Think of the composition of your servers. They're little more than a series of files on a hard drive. A few of those files make up the registry of your computer and the driver sets that run the physical components. In the virtualization environment, it is really no different. For most virtualization solutions, those files that make up your computer are instead squished together into a single file. The only difference between the physical and virtual machines is the drivers needed to run the physical equipment. All the P2V tools on the market do is move those files from the physical server into the virtual server's file and change the driver sets.
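To make that more concrete, here's a deliberately simplified sketch in Python of the two jobs a P2V tool performs: streaming the contents of a source disk into a single flat image file, and swapping the physical driver set for its virtual equivalents. The paths, driver names and mapping below are hypothetical; this isn't any vendor's implementation, just the shape of the process.

import shutil

def p2v_copy(source_disk: str, target_image: str, chunk_size: int = 1024 * 1024) -> None:
    """Stream the raw contents of a physical disk into one flat image file."""
    with open(source_disk, "rb") as src, open(target_image, "wb") as dst:
        shutil.copyfileobj(src, dst, length=chunk_size)

def swap_drivers(installed: set[str]) -> set[str]:
    """Replace drivers for physical hardware with their virtual equivalents."""
    physical_to_virtual = {                      # hypothetical mapping
        "raid_controller.sys": "virtual_scsi.sys",
        "broadcom_nic.sys": "virtual_nic.sys",
    }
    return {physical_to_virtual.get(driver, driver) for driver in installed}

if __name__ == "__main__":
    # Real tools read the live registry and block devices, often while the
    # source server is still running; this just shows the driver swap.
    print(swap_drivers({"raid_controller.sys", "broadcom_nic.sys", "ntfs.sys"}))

Everything else a commercial P2V tool adds (scheduling, incremental synchronization, cutover) is convenience wrapped around those two steps.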

The installation of the virtual server product is typically very quick and easy. You can get an initial build up and running in 20 minutes. The time-consuming part is P2V'ing your existing servers into the virtualization environment once it's created. Depending on the speed and congestion of your network, the size of the drives on your physical machines, the speed of the hosts in the virtualization environment and the type of P2V software you purchase, that process can take anywhere from three to five hours or more per physical machine. That means a lot of late nights.
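If you want a rough feel for where those hours go, a back-of-the-envelope calculation like the one below, assuming the network link is the bottleneck, gets you in the neighborhood. The numbers are illustrative rather than measurements, and they ignore the conversion and driver-injection time tacked on at the end.

def p2v_hours(disk_gb: float, link_mbps: float, efficiency: float = 0.5) -> float:
    """Estimate transfer time for one server, assuming the link is the bottleneck."""
    effective_mbps = link_mbps * efficiency          # congestion and protocol overhead
    seconds = (disk_gb * 8 * 1024) / effective_mbps  # GB to megabits, divided by throughput
    return seconds / 3600

if __name__ == "__main__":
    # A 300 GB server over a shared gigabit link at roughly 50 percent efficiency
    print(f"{p2v_hours(300, 1000):.1f} hours")       # about 1.4 hours before conversion overhead

Run the same math against a few terabytes of direct-attached storage on a congested 100 Mb link and "three to five hours or more" starts to look optimistic.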

A number of tools are available that automate this process. VMware offers VMware Converter, which allows for a single, no-cost P2V for each installation; a purchase is needed if you want to do more than one. PlateSpin, LeoStream and Acronis also have enterprise-worthy tools for P2V'ing large numbers of machines. Some freeware tools also exist, like EZP2V for BartPE and Ultimate P2V. Or, you always have the option of rebuilding the server instance from scratch inside the virtualization environment. If a physical server exhibits behavior that makes you doubt the stability of its configuration, a complete rebuild is often the best idea. As they say, garbage in, garbage out.

Backups Both Ways
Once you move to virtualization, backups get slightly more complicated. At first blush, the compression of a server's files into a single file makes the whole process seem really easy: You back up that server's disk file and you're ready to go. If your machine crashes, the restore involves a single file rather than the millions of files that make up the server.

For servers that store static data -- data not housed in transactional databases -- this is an excellent mechanism for getting the server back in line in a flash. Modern applications for virtual machine disk backups use snapshot technology to quiesce the file before backing it up. This ensures that the restored file is what we call "O/S consistent," meaning that the O/S will successfully come back alive once restored. However, sometimes the database residing on that server may have been in the middle of a transaction or other activity. In this case, when the disk file is quiesced by the backup software, the disk file is consistent but the database is not.

Databases like Exchange, Active Directory and SQL can come back after a restore in an inconsistent state. In those cases, you should consider also running system-level backups inside the virtual machine, using tools like Symantec Backup Exec, Veritas NetBackup or the native NTBackup, to back those databases up separately. These tools incorporate agents that ensure the database itself is quiesced before the backup starts. Some backup tools at the virtual machine layer are beginning to develop this capability, which would negate the extra step, but those features are still forthcoming.
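The important part is the ordering, and a small sketch makes it plain. The function names below are hypothetical rather than the API of Backup Exec, NetBackup or any VM-level backup product; the point is simply that the database is quiesced inside the guest before the host-level snapshot of the disk file is taken.

def quiesce_database(instance: str) -> None:
    """In-guest agent work: flush pending transactions and freeze new writes."""
    print(f"quiescing {instance}: flushing logs, suspending writes")

def resume_database(instance: str) -> None:
    print(f"resuming {instance}")

def snapshot_vm_disk(vm_disk: str) -> str:
    """Host-level work: take a point-in-time snapshot of the single disk file."""
    snapshot = vm_disk + ".snap"
    print(f"snapshot taken: {vm_disk} -> {snapshot}")
    return snapshot

def backup_vm(vm_disk: str, databases: list[str]) -> str:
    """Quiesce first, snapshot second, then let the databases run again."""
    for db in databases:
        quiesce_database(db)
    try:
        return snapshot_vm_disk(vm_disk)   # now O/S *and* database consistent
    finally:
        for db in databases:
            resume_database(db)

if __name__ == "__main__":
    backup_vm("mailserver.vmdk", ["ExchangeIS"])

Reverse the order, snapshot first and quiesce second, and you're back to an O/S-consistent image wrapped around a database that may not mount.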

In either case, as you plan your virtualization rollout, look into tools like VMware Consolidated Backup (VCB) or Vizioncore esxRanger to do backups at the virtual server level in conjunction with the backup clients inside the virtual machine itself.

DR Consistency
The National Archives and Records Administration states that 96 percent of companies that lose access to their data center for 10 days or longer are out of business within a year. It's the fear of data loss that keeps many business owners and IT managers awake at night. But until recently, the creation and management of a full backup site for disaster recovery was out of the realm of possibility for many businesses. With virtualization, that need no longer be the case.

The same problems with database consistency hold true when considering virtualization for disaster recovery. Many companies move to virtualization because of the added bonus of easily and inexpensively bolting on disaster recovery capabilities. Once the virtualization environment is installed and all systems are P2V'ed, the virtual server-level backup software can very easily move those backups to an off-site data storage location. If you ever lose your primary site, restoring its servers is as easy as connecting them to another virtualization environment at the backup site and "pressing the green VCR button" to power them up. Right?

That's partially true. The problem with database consistency is exacerbated in a DR situation. If you're not using quiescing tools like those discussed in the previous section to snapshot your transactional databases before they're backed up and shipped off to a remote site, a restoration may not be that clean and easy. So, when considering disaster recovery plans in your virtualization rollout, make sure you account for the same consistency problems you faced with backups.

Some tools are available that make the DR process quite easy. Among others, two tools can provide automated shipment of virtual machine backups off-site while still enabling consistency. Double-Take Software has a product called Double-Take for Virtual Systems that will automatically do the quiescing, the backup and the shipment of the backups to the remote site as often as you want. One of the neat features of the Double-Take product is the ability to ship only the changes across the wire, reducing your bandwidth needs. The software also lets you store up to 64 incremental backups and roll back or forward to any of them during a restore event. This is useful when your primary site gets hit with a massive virus attack and you want to roll back to just before the attack started. Double-Take for Virtual Systems, however, does not currently have the capability of quiescing databases inside virtual machines. So, for those machines, you'll need to also purchase Double-Take's host-based application, Double-Take for Windows.
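Those two ideas, shipping only the changed blocks and keeping a window of restore points you can roll back through, are easier to picture with a toy example. The sketch below models disks as simple in-memory block lists; it isn't Double-Take's product or protocol, just an illustration of the concept.

MAX_POINTS = 64  # retention window, echoing the 64-increment example above

def diff_blocks(previous: list[bytes], current: list[bytes]) -> dict[int, bytes]:
    """Return only the blocks that changed since the last replication pass."""
    return {i: block for i, (old, block) in enumerate(zip(previous, current)) if old != block}

def replicate(history: list[dict[int, bytes]], previous: list[bytes], current: list[bytes]) -> None:
    """Ship one incremental to the remote site and trim the retention window."""
    history.append(diff_blocks(previous, current))
    del history[:-MAX_POINTS]

def restore(baseline: list[bytes], history: list[dict[int, bytes]], point: int) -> list[bytes]:
    """Roll forward from the baseline to any retained point in time."""
    disk = list(baseline)
    for increment in history[:point]:
        for index, block in increment.items():
            disk[index] = block
    return disk

if __name__ == "__main__":
    baseline = [b"boot", b"data", b"logs"]
    day1 = [b"boot", b"DATA", b"logs"]            # one block changed
    day2 = [b"boot", b"DATA", b"infected"]        # the virus lands on day two
    history: list[dict[int, bytes]] = []
    replicate(history, baseline, day1)
    replicate(history, day1, day2)
    print(restore(baseline, history, point=1))    # roll back to just before the attack

Because only the changed blocks cross the wire, the remote copy stays current without re-sending whole disk files every night.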

Vizioncore has a tool called esxReplicator with a similar feature set that can perform many of the same functions as the Double-Take product. Vizioncore has mentioned, however, that it is working on adding database consistency to the product in a future release.

Either product enables your business to stand up a virtual "warm site" at a remote location, ready to be powered on in the event of a primary site failure. This reduces the overall cost and management headache of supporting the remote site, as changes to production machines are automatically replicated as they occur. The pricing of these products also brings real DR designs into the realm of possibility for even the smallest networks.

Virtually Wrapping it Up
So that's our series on virtualization. We've taken the time to understand how performance management weighs heavily into the success or failure of your virtualization investment. We've talked about the contenders in the market, and how the hype may not necessarily drive you to the solution you think you need. And in this, our third installment, we've covered the add-on products for P2V, backups and disaster recovery that enable a holistic solution for systems management and protection.

The next steps are up to you. Getting through your virtualization assessment, determining the best product to suit your needs and adding in the feature sets needed by your business are all critical parts of your virtualization project plan. Expect more than a few late nights watching servers finish the P2V process while you catch up on all the old reruns you've been missing.

About the Author

Greg Shields is Author Evangelist with Pluralsight, and is a globally recognized expert on systems management, virtualization, and cloud technologies. A multiple-year recipient of the Microsoft MVP, VMware vExpert, and Citrix CTP awards, Greg is a contributing editor for Redmond Magazine and Virtualization Review Magazine, and a frequent speaker at IT conferences worldwide. Reach him on Twitter at @concentratedgreg.
