Thursday, August 25, 2011

Keeping Virtualization Costs Down

Virtualization itself is a technology that considerably lowers IT operating costs. Right at the start, multiple servers can be launched and operated without the need for additional hardware. Then come energy savings, ease and speed of use for both users and administrators, and many more economic benefits. What, then, could actually cause virtualization operating costs to rise?

Virtual machines depend on numerous underlying technologies to operate. A group of VMs all utilize a common hardware platform, to which data is saved and from which it is read. Hence, if there are any issues with I/O operations, every virtual machine hosted on that hardware will be affected.
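To make that dependency concrete, here is a minimal Python sketch of several VMs drawing on one shared disk. The VM names and IOPS figures are invented for illustration, and the proportional throttling is a simplifying assumption rather than how any particular hypervisor actually schedules I/O:

    # Illustrative only: a toy model of VMs sharing one physical disk.
    # The IOPS figures are assumptions chosen for demonstration, not measurements.

    SHARED_DISK_IOPS = 5000          # total I/O operations per second the host disk can sustain

    def delivered_iops(vm_demands):
        """Serve VM I/O demands from the shared budget and report what each VM receives."""
        total_demand = sum(vm_demands.values())
        scale = min(1.0, SHARED_DISK_IOPS / total_demand)   # every VM is throttled proportionally
        return {vm: demand * scale for vm, demand in vm_demands.items()}

    normal = {"vm-web": 1200, "vm-db": 1500, "vm-mail": 800}
    noisy  = {"vm-web": 1200, "vm-db": 1500, "vm-mail": 800, "vm-backup": 6000}

    print(delivered_iops(normal))   # every VM receives its full demand
    print(delivered_iops(noisy))    # one I/O-heavy VM drags every other VM down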

Issues with I/O read and write operations are some of the top barriers to computer system performance, physical or virtual. But because an I/O must pass through multiple layers in a virtual environment, such issues can have an even more profound impact on VMs.
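As a rough illustration of why the virtual case is worse, the following sketch adds an assumed per-layer cost to every I/O. The layer names and microsecond figures are assumptions chosen for the example, not measurements of any real platform:

    # Illustrative arithmetic only: layer names and per-I/O costs are assumed values.

    DEVICE_IO_US = 200                     # assumed service time of one I/O at the storage device
    LAYER_OVERHEAD_US = {                  # assumed added cost per I/O at each virtualization layer
        "guest file system": 20,
        "guest storage driver": 15,
        "hypervisor I/O stack": 40,
        "host file system / virtual disk": 45,
    }

    virtual_io_us = DEVICE_IO_US + sum(LAYER_OVERHEAD_US.values())
    excess_ios = 1_000_000                 # extra I/Os generated by an I/O-heavy workload

    print(f"per-I/O cost, bare metal:  {DEVICE_IO_US} us")
    print(f"per-I/O cost, virtualized: {virtual_io_us} us")
    print(f"extra time for {excess_ios:,} excess I/Os, bare metal:  {excess_ios * DEVICE_IO_US / 1e6:.0f} s")
    print(f"extra time for {excess_ios:,} excess I/Os, virtualized: {excess_ios * virtual_io_us / 1e6:.0f} s")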

In addition to the general slow performance caused by I/O bottlenecks, which leads to sluggish VMs, slowed or stalled backups and other major problems, I/O troubles are also responsible for issues that might not be so readily associated with them. For example, excessive I/O activity can decrease hardware life by 50 percent or more. Since that hardware is the host for the VMs, attention to hardware life is crucial.

Also particular to virtual environments is the symptom of slow virtual migration. Migrating servers from physical to virtual (known as P2V), or from one type of virtual machine to another, is a basic operation in virtual environments. When this process slows down it becomes cumbersome, especially if users or processes are waiting for the new virtual machine. As with the other issues listed above, slow virtual migration can be traced directly to issues with I/O operations.

Because of the many layers inherent in a virtual environment, a comprehensive virtual platform disk optimizer is required as the solution. Such a solution drastically and automatically reduces the number of I/Os required to read and write files. It also coordinates I/O resources across the platform and addresses virtual disk “bloat,” a condition brought on by excessive I/Os and for which there is no other solution.

Issues with I/O operations raise operating costs within a virtual environment across the board. A virtual platform disk optimizer is the key to keeping them under control.

Thursday, August 18, 2011

Understanding Computer Performance Issues

The larger something is, and the more components that go into making it up, the more complex it will appear. Think of a simple single-family home, then compare it to a thousand-unit condo building. Comprehending the single-family dwelling, with its plumbing, electrical and other systems, is relatively easy compared to the much larger building with miles of pipe and wire, where problems could occur anywhere along the lines.

This is certainly true of a computer system. Twenty servers, along with 500 associated desktop systems, cabling, routers and peripherals, are certainly going to be viewed as more complex than a single desktop system by itself.

But just because the arrangement is more complicated, it doesn’t mean that performance problems must also be complex. True, there might be something like a bandwidth-based bottleneck between a couple of the servers that, through a trickle-down effect, is slightly degrading the performance of the network, and it might take an IT person several hours to scope it out. But performance problems that are recurring and of a similar nature almost always have a rather simple cause.

If slow performance is occurring across a system, with daily helpdesk calls and complaints coming from all over the company, it’s a sure bet that there are issues with read and write I/Os down at the file system level. A substantial excess of reads and writes is occurring, due to a condition known as fragmentation. Fragmentation is a natural occurrence in all computer systems; files and free space are broken into thousands or tens of thousands of pieces (fragments) in order to better utilize disk space. The invariable result, if fragmentation is not addressed, is impeded performance.
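A back-of-the-envelope sketch shows how quickly that excess adds up. The file size, fragment count and maximum request size below are assumed values, and the model ignores caching and read-ahead, but the multiplication is the point:

    # A minimal sketch of why fragmentation multiplies I/O traffic. Sizes and
    # fragment counts are assumptions for illustration, not measured values.

    def ios_to_read(file_size_kb, fragments, max_io_kb=1024):
        """Each fragment needs at least one I/O; a contiguous file can be read
        in large sequential requests, so it needs far fewer of them."""
        per_fragment_kb = file_size_kb / fragments
        ios_per_fragment = max(1, -(-per_fragment_kb // max_io_kb))   # ceiling division
        return int(fragments * ios_per_fragment)

    print(ios_to_read(file_size_kb=512_000, fragments=1))        # contiguous: ~500 I/Os
    print(ios_to_read(file_size_kb=512_000, fragments=20_000))   # fragmented: 20,000 I/Os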

In the past, most enterprises used defragmentation technology to handle the problem. This technology re-assembles fragmented files so that fewer I/Os are required to access them. Some defrag solutions also consolidate free space so that files can be written in far fewer pieces. Some of these solutions are run manually, some are scheduled, and a few are actually automatic.
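For illustration, here is a highly simplified sketch of that idea, assuming a file can be described as a list of (start_block, length) extents and that a large enough contiguous free region is available; real defragmenters work against the file system's own allocation structures rather than a toy list like this:

    # A simplified sketch of defragmentation: relocate every fragment into one
    # contiguous run so a single extent (and far fewer I/Os) covers the file.
    # The extent list and free-region address are invented for the example.

    def defragment(extents, free_region_start):
        """Return the file's new layout after moving it into contiguous space."""
        total_blocks = sum(length for _, length in extents)
        return [(free_region_start, total_blocks)]

    fragmented = [(900, 4), (100, 8), (3021, 8), (116, 8), (57, 4)]
    print("extents before:", len(fragmented))          # 5 extents = at least 5 I/Os to read
    print("extents after: ", len(defragment(fragmented, free_region_start=10_000)))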

Today, however, there is performance software that goes beyond defrag and actually prevents a majority of these kinds of performance problems before they ever occur. It operates fully automatically, which means that once it’s installed, site-wide slow performance is a thing of the past. Helpdesk calls drop off dramatically, processes complete far more quickly, and employee productivity even improves. Best of all, a prime source of slow performance is totally eliminated.

Despite the complexity of such systems, this is a simple problem with a simple solution. It should be addressed first, before spending hours tracing the myriad other issues that can occur as a result.

Wednesday, August 10, 2011

Barriers to Virtual Machine Performance

Virtual machines (VMs) have created a revolution in computing. The ability to launch a brand-new server with a few keystrokes, utilize it, and then discontinue or change that utilization, is a facility that will only grow with time. The future direction for virtual machines is probably a scenario in which a majority of computing is actually performed on VMs, with minimal hardware only present for hosting purposes.

The technologies underlying virtualization are quite remarkable. They add up to resources being coordinated and shared in such a way that work gets done across multiple platforms and environments almost as if no barriers existed at all. However, there are several issues that, if not properly addressed, can severely impact virtual machine performance.

First is the issue of I/O reads and writes. If reads and writes are being conducted in the presence of file fragmentation, I/O bandwidth will quickly bottleneck. Fragmentation is the age-old problem of files being split into tens or hundreds of thousands of pieces (fragments) for better utilization of hard drive space.

In a virtual environment, fragmentation has a substantial impact, if only due to the multiple layers that a single I/O request must pass through. If each I/O retrieves only a single file fragment, performance is critically slowed, and this condition can even lead to an inability to run more VMs.

In dealing with fragmentation, there is also the need to coordinate shared I/O resources across the platform. A simple defragmentation solution will cut across the production needs of VMs, simply because it does not effectively prioritize I/Os.
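The missing ingredient can be sketched in a few lines: a priority queue in which production I/Os always drain before background optimization I/Os. The request names and priority levels below are invented for the example and do not depict any particular product's scheduler:

    # A minimal sketch of I/O prioritization: background work is only serviced
    # when no production VM request is waiting. All entries are illustrative.

    import heapq

    PRODUCTION, BACKGROUND = 0, 1          # lower number = served first

    queue = []
    def submit(priority, order, description):
        heapq.heappush(queue, (priority, order, description))

    submit(BACKGROUND, 1, "optimizer: relocate fragment")
    submit(PRODUCTION, 2, "vm-db: read transaction log")
    submit(PRODUCTION, 3, "vm-web: write session data")
    submit(BACKGROUND, 4, "optimizer: relocate fragment")

    while queue:
        _, _, description = heapq.heappop(queue)
        print("servicing:", description)   # production I/Os drain before background work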

There is also the situation of virtual disk “bloat”: wasted disk space that accumulates when virtual disks are set to dynamically grow but do not shrink when users or applications remove data.
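A toy model makes the bloat visible, under the assumption that a dynamically growing virtual disk can be treated as a set of host-allocated blocks, only some of which the guest still uses; real compaction tools of course operate on the actual virtual disk format rather than a Python dictionary:

    # A toy model of virtual disk "bloat" and compaction. Block counts are
    # invented for the example.

    class DynamicVirtualDisk:
        def __init__(self):
            self.allocated = {}            # block number -> still in use by the guest?

        def write(self, block):
            self.allocated[block] = True   # the file on the host grows

        def delete(self, block):
            self.allocated[block] = False  # the guest frees it, but the host file stays large

        def compact(self):
            """Release allocated-but-unused blocks so the host file shrinks again."""
            self.allocated = {b: True for b, used in self.allocated.items() if used}

    disk = DynamicVirtualDisk()
    for block in range(1000):
        disk.write(block)
    for block in range(800):
        disk.delete(block)                 # the guest deletes most of the data

    print("host blocks before compaction:", len(disk.allocated))   # 1000
    disk.compact()
    print("host blocks after compaction: ", len(disk.allocated))   # 200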

Although there are multiple barriers, there is a single answer to them all: virtual platform disk optimization technology. The first barrier, fragmentation, is dealt with by preventing a majority of it before it even occurs. With files existing in as few fragments as possible, I/O reads and writes occur at maximum speed. Resources are also coordinated across the platform so that VM production needs are fully taken into account.

Such software also contains a compaction feature so that wasted disk space can be easily eliminated.

These barriers can frustrate the management of virtual environments. Fortunately, IT personnel can solve them with a single virtual machine optimization solution.