Wednesday, February 16, 2011

Fragmentation Prevention and Virtualization

From a long-term viewpoint, virtualization has always been a primary goal of computing. The extreme of this term, of course, would be virtual environments, such as those found on the holodeck of the fictional starship Enterprise, where entire worlds can be virtualized at will. Although R&D labs are secretly working to get us closer to such a scenario, we’re not there yet and probably have a ways to go. What we do have, though, is something almost as fascinating from a technical standpoint: virtual computers.

We have now reached the point where a user can actually generate and launch a virtual server within minutes. The server shares hardware resources with several others, but exists as its own whole and independent computing entity. Advances continue to be made; it has recently been announced that such virtual environments can be rapidly moved to a third-party cloud, with replications of the exact local configurations and architecture. It’s almost (but not quite) a case of “hardware, what hardware?”

In fact, because a virtual server creates the illusion of existing independently of its host hardware, it can appear that it doesn’t suffer from the prime performance enemy of physical hard drives: file fragmentation. But even a quick examination of how a virtual server works shows that not only do virtual servers suffer from fragmentation, fragmentation can affect them even more severely than it does physical servers.

The data utilized by a virtual machine is still saved on a hard drive. A single drive or set of drives supports a number of virtual machines—called “guest systems”—and data from all of those machines is saved on the drive or set of drives of what is called the “host system.”

Each virtual machine issues its own I/O requests, which are relayed to the host system. Hence, multiple I/O requests occur for each file request—minimally, one from the guest system and another from the host system. When files are split into hundreds or thousands of fragments (not at all uncommon), multiple I/O requests are generated for every fragment of every file. This overhead is then multiplied by the number of virtual machines resident on the host server, and a little arithmetic shows the result: seriously degraded performance.
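The arithmetic above can be sketched in a few lines. This is a back-of-the-envelope illustration only; the fragment counts, number of guests, and the assumption of exactly two I/O layers (guest plus host) are illustrative figures, not measurements from any particular hypervisor.

```python
# Back-of-the-envelope estimate of I/O amplification on a virtualized host.
# All numbers below are illustrative assumptions, not measured values.

fragments_per_file = 500   # a heavily fragmented file (hundreds of fragments)
io_layers = 2              # minimally: one guest-level request, one host-level request
vms_on_host = 10           # guest systems sharing the same physical drive(s)

# Requests needed to read one such fragmented file from a single guest:
requests_per_file = fragments_per_file * io_layers

# If each guest reads one such file, the host must service:
total_requests = requests_per_file * vms_on_host

# Contrast with the same workload on contiguous (defragmented) files:
contiguous_requests = 1 * io_layers * vms_on_host

print(requests_per_file)     # 1000 requests for one file
print(total_requests)        # 10000 requests across ten guests
print(contiguous_requests)   # 20 requests for the defragmented case
```

Even with these modest assumed numbers, the fragmented case generates hundreds of times more I/O requests than the contiguous one, which is why fragmentation hits a shared host drive so hard.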

Keeping files in a defragmented state is clearly the answer to virtualization performance. But just as virtualization technology continues to advance, so does performance technology. Solutions now exist that automatically prevent the majority of fragmentation before it even occurs, making fragmentation nearly a thing of the past.

Virtual environments do suffer from fragmentation. All sites utilizing this fantastic technology should take care to make sure fragmentation is fully addressed, so that the full benefit of these incredible advances can be realized.
