
Wednesday, August 10, 2011

Barriers to Virtual Machine Performance

Virtual machines (VMs) have created a revolution in computing. The ability to launch a brand-new server with a few keystrokes, put it to work, and then retire or repurpose it is a capability that will only grow in importance. The likely future for virtual machines is a scenario in which the majority of computing is actually performed on VMs, with physical hardware present mainly for hosting purposes.

The technologies behind virtualization are quite remarkable. Together, they allow resources to be coordinated and shared so that work gets done across multiple platforms and environments almost as if no barriers existed at all. However, there are several issues that, if not properly addressed, can severely impact virtual machine performance.

First is the issue of I/O reads and writes. If reads and writes are being conducted in the presence of file fragmentation, I/O bandwidth will quickly bottleneck. Fragmentation is the age-old condition in which the file system, in order to make better use of hard drive space, splits files into tens or hundreds of thousands of pieces (fragments).

In a virtual environment, fragmentation has a substantial impact, if only because of the multiple layers that a single I/O request must pass through. When every file fragment requires its own I/O, performance is critically slowed, and the condition can even make it impossible to run additional VMs.
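
As a rough, purely hypothetical sketch (the function and numbers below are invented for illustration, not taken from any particular product or measurement), the difference can be expressed in a few lines of Python: a contiguous file can be streamed with a few large sequential reads, while a fragmented file needs at least one request per fragment.

    # Rough sketch of read amplification (all numbers are hypothetical).
    # A contiguous file can be streamed with a few large sequential reads,
    # while a fragmented file needs at least one request per fragment.

    def read_requests(file_size_mb, fragments, io_size_mb=64):
        sequential = -(-file_size_mb // io_size_mb)  # ceiling division
        return max(sequential, fragments)

    print(read_requests(2048, fragments=1))       # contiguous 2 GB file: 32 requests
    print(read_requests(2048, fragments=50000))   # same file in 50,000 fragments: 50,000 requests

And every one of those extra requests must still pass through the multiple guest and host layers just described, so the penalty compounds in a virtual environment.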

In dealing with fragmentation, there is also the need to coordinate shared I/O resources across the platform. A simple defragmentation solution will cut across the production needs of VMs, simply because it does not effectively prioritize I/Os.

There is also the situation of virtual disk “bloat”: wasted disk space that accumulates when virtual disks are set to dynamically grow but don’t then shrink when users or applications remove data.

Although there are several of these barriers, there is a single answer to all of them: virtual platform disk optimization technology. The first barrier, fragmentation, is addressed by preventing the majority of it before it ever occurs. When files exist in as few fragments as possible, I/O reads and writes occur at maximum speed. Resources are also coordinated across the platform so that VM production needs are fully taken into account.

Such software also contains a compaction feature so that wasted disk space can be easily eliminated.

These barriers can frustrate the management of virtual environments. Fortunately, IT personnel can solve them with a single virtual machine optimization solution.

Wednesday, July 20, 2011

Keeping Virtual Machines at High Speed

Every day, there is more innovation involving the use of virtual machines. For example, development is underway to virtualize user PCs when they are not in use, so that the physical machines can be shut down and power can be saved. In another example, virtual machines have given a considerable boost to cloud computing, and new cloud platforms and cloud system management options are constantly appearing on the horizon. Overall, it is clear that virtual machine technology has blown the door to our future wide open.

From a user standpoint, virtual technology will only become simpler. A few keystrokes and a new virtual machine is launched, complete with an operating system and the applications required for the tasks at hand. As with all computing technology, however, much is occurring beneath that simple interface: various resources are being shared and coordinated so that work flows smoothly across multiple platforms and environments.

The simplicity and speed which virtualization provides, however, can be heavily impacted if several key issues are not addressed.

The basic level of I/O reads and writes can determine the speed of the entire environment. Fragmentation, a by-product of allocation methods originally designed to make better use of hard drive space, splits files into thousands or tens of thousands of pieces (fragments). Because many extra I/Os are then required for reading and writing, performance can slow to a crawl, and I/O bandwidth will quickly bottleneck.

Due to the multiple layers that an I/O request must pass through within a virtual environment, fragmentation has even more of an impact than it does on a hardware-only platform. It can even lead to an inability to launch and run more virtual machines.

In virtual environments, fragmentation cannot be dealt with by a simple defragmentation solution. Such a solution does not effectively prioritize I/Os, and shared I/O resources are therefore not coordinated.

A condition also exists within virtual environments that can be referred to as virtual disk “bloat.” This is wasted disk space that occurs when virtual disks are set to dynamically grow but don’t then shrink when users or applications remove data.
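
As a purely illustrative toy model (the class and figures below are hypothetical, not drawn from any real virtual disk format), the one-way growth behind this bloat can be sketched as follows:

    # Toy model of a dynamically expanding virtual disk (illustrative only).
    # The space allocated on the host grows with writes but is not returned
    # when data is deleted inside the guest, so bloat = allocated minus live data.

    class DynamicVirtualDisk:
        def __init__(self):
            self.allocated_gb = 0    # space consumed on the host datastore
            self.live_data_gb = 0    # data actually present in the guest

        def write(self, gb):
            self.live_data_gb += gb
            self.allocated_gb = max(self.allocated_gb, self.live_data_gb)

        def delete(self, gb):
            self.live_data_gb = max(0, self.live_data_gb - gb)
            # allocated_gb stays where it is; the host-side file never shrinks

        def bloat_gb(self):
            return self.allocated_gb - self.live_data_gb

    disk = DynamicVirtualDisk()
    disk.write(80)    # the guest fills the disk with 80 GB of data
    disk.delete(50)   # 50 GB of that data is later removed
    print(disk.allocated_gb, disk.live_data_gb, disk.bloat_gb())   # 80 30 50

In this toy example, a compaction pass is what would return that 50 GB difference to the host.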

All of these issues, fortunately, are answered by a single solution: virtual platform disk optimization technology. Fragmentation itself is dealt with by preventing the majority of it before it ever occurs. When files exist in a non-fragmented state, far fewer read and write I/Os are needed to handle them, and speed is vastly improved. Resources are coordinated across the platform so that virtual machine production needs are fully taken into account. Wasted disk space is easily eliminated with a compaction feature.

These basic problems can keep virtual technology from providing the simplicity, speed and considerable savings in resources that it should. They can now all be handled with a single virtual machine optimization solution.

Wednesday, February 16, 2011

Fragmentation Prevention and Virtualization

From a long-term viewpoint, virtualization has always been a primary goal of computing. The extreme of this idea, of course, would be virtual environments, such as those found on the holodeck of the fictional starship Enterprise, where entire worlds can be virtualized at will. Although R&D labs are secretly working to get us closer to such a scenario, we’re not there yet and probably have a ways to go. What we do have, though, is something almost as fascinating from a technical standpoint: virtual computers.

We have now reached the point where a user can actually generate and launch a virtual server within minutes. The server shares hardware resources with several others, but exists as its own whole and independent computing entity. Advances continue to be made; it has recently been announced that such virtual environments can be rapidly moved to a third-party cloud, replicating the exact local configurations and architecture. It’s almost (but not quite) a case of “hardware, what hardware?”

In fact, because a virtual server creates the illusion of existing independently of its host hardware, it can appear that it doesn’t suffer from the prime performance enemy of physical hard drives: file fragmentation. But even a quick examination of how a virtual server works shows that not only do virtual servers suffer from fragmentation, but fragmentation can have an even more severe effect on them than it does on physical servers.

The data being used by a virtual machine is still saved on a hard drive. A single drive or set of drives supports a number of virtual machines, called “guest systems,” and the data from all of those machines is saved on the drives belonging to what is called the “host system.”

A virtual machine issues its own I/O requests, which are relayed to the host system. Hence, multiple I/O requests occur for each file request: at minimum, one request from the guest system and another from the host system. When files are split into hundreds or thousands of fragments (not at all uncommon), multiple I/O requests are generated for each fragment of every file. That load is then multiplied by the number of virtual machines resident on the host server, and once the math is done it is easy to see that the result is seriously degraded performance.
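
To put rough numbers on that multiplication (every figure below is hypothetical, chosen only to show how quickly the requests add up):

    # Back-of-the-envelope arithmetic for the multiplication described above.
    # All figures are hypothetical.

    fragments_per_file = 1000   # a moderately fragmented file
    requests_per_fragment = 2   # at minimum: one guest-level I/O plus the relayed host-level I/O
    vms_on_host = 10            # guest systems sharing the same host drives

    ios_per_file = fragments_per_file * requests_per_fragment
    print(ios_per_file)                  # 2000 I/O requests just to read one file

    print(ios_per_file * vms_on_host)    # 20000 requests if each guest reads one such file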

It is obvious that keeping files in a defragmented state is the answer to maintaining virtualization performance. But just as virtualization technology continues to advance, so does performance technology. A solution now exists to automatically prevent the majority of fragmentation before it ever occurs, making fragmentation nearly a thing of the past.
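
The post does not describe how that prevention works internally, but one common, general-purpose way to keep a new file from fragmenting is to reserve its full size in a single allocation before writing to it. The sketch below is only an illustration of that general idea (the path is made up, and os.posix_fallocate is a Unix-only call); it is not a description of any particular product’s method.

    # Illustrative only: preallocating a file's full size up front gives the
    # file system the chance to find one contiguous run of free space,
    # rather than growing the file piecemeal into scattered fragments.

    import os

    def preallocate(path, size_bytes):
        with open(path, "wb") as f:
            os.posix_fallocate(f.fileno(), 0, size_bytes)   # Unix-only call

    preallocate("/tmp/vm_scratch.img", 512 * 1024 * 1024)   # reserve 512 MB at once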

Virtual environments do suffer from fragmentation. All sites utilizing this fantastic technology should take care to make sure fragmentation is fully addressed, so that the full benefit of these incredible advances can be realized.