Thursday, August 25, 2011

Keeping Virtualization Costs Down

Virtualization itself is a technology that considerably lowers IT operating costs. Right at the start, multiple servers can be launched and operated without the need for additional hardware. Then come energy savings, the ease and speed of use for users and administrators, and many more economic benefits.  What could actually cause virtualization operating costs to rise?

Virtual machines depend on numerous innovations to operate. Multiple VMs utilize a common hardware platform, to which data is saved and from which it is read. Hence, if there are any issues with I/O operations, every virtual machine hosted on that hardware will be affected.

Issues with I/O read and write operations are among the top barriers to computer system performance, physical or virtual. But because an I/O must pass through multiple layers in a virtual environment, such issues can have an even more profound impact on VMs.

In addition to the general slow performance caused by I/O bottlenecks—sluggish VMs, slowed or stalled backups and other major problems—trouble with I/Os is also responsible for issues that might not so readily be associated with it. For example, excessive I/O activity can shorten hardware life by 50 percent or more. Since that hardware is the host for the VMs, attention to hardware life is crucial.

Also particular to virtual environments is the symptom of slow virtual migration. The task of migrating servers from physical to virtual (known as P2V) or from one type of virtual machine to another is a basic operation in virtual environments. The slowing down of this process can be cumbersome, especially if users or processes are waiting for the new virtual machine. As with the other issues listed above, slow virtual migration can be traced directly to issues with I/O operations.
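As a rough illustration of how much a migration can stretch out when source reads are degraded by such I/O issues, here is a back-of-envelope sketch. The data size and throughput figures are assumptions chosen for the example, not benchmarks of any particular product or tool.

# Back-of-envelope estimate of P2V migration time. All figures below are
# illustrative assumptions, not measurements.

def migration_hours(data_gb, throughput_mb_s):
    """Hours needed to copy data_gb at a sustained rate of throughput_mb_s."""
    return (data_gb * 1024) / throughput_mb_s / 3600

source_data_gb = 500     # assumed size of the source server's data
healthy_mb_s = 100       # assumed copy rate when source files are contiguous
degraded_mb_s = 25       # assumed rate when reads are dominated by scattered fragments

print(f"healthy source:    {migration_hours(source_data_gb, healthy_mb_s):.1f} hours")
print(f"fragmented source: {migration_hours(source_data_gb, degraded_mb_s):.1f} hours")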

Because of the many innovations inherent in a virtual environment, the solution required is a comprehensive virtual platform disk optimizer. The number of I/Os required to read and write files is drastically and automatically reduced. Such a solution also coordinates shared I/O resources and addresses virtual disk “bloat”—a condition that develops through excessive I/Os and for which there is no other remedy.

Issues with I/O operations raise operating costs within a virtual environment across the board. A virtual platform disk optimizer is the key to keeping them under control.

Thursday, August 18, 2011

Understanding Computer Performance Issues

The larger something is, and the more components that go into making it up, the more complex it appears. Think of a simple single-family home—then compare it to a thousand-unit condo building. Comprehending the single-family dwelling—its plumbing, electrical and other systems—is relatively easy compared to the much larger building, with its miles of pipe and wire and problems that could occur anywhere along the lines.

This is certainly true of computer systems. Twenty servers, along with 500 associated desktop systems, cabling, routers and peripherals, are certainly going to be viewed as more complex than a single desktop system by itself.

But just because the arrangement is more complicated, it doesn’t mean that performance problems must also be complex. True, there might be something like a bandwidth bottleneck between a couple of the servers that is, by trickle-down effect, slightly degrading the performance of the network, and it might take an IT person several hours to track it down. But performance problems that are recurring and of a similar nature almost always have a rather simple cause.

If slow performance is occurring across a system, with daily helpdesk calls and complaints coming from all over the company, it’s a sure bet that there are issues with read and write I/Os down at the file system level. A substantial excess of reads and writes is occurring, due to a condition known as fragmentation. Fragmentation is a natural occurrence in all computer systems; files and free space are broken into thousands or tens of thousands of pieces (fragments) in order to better utilize disk space. The uniform result, if fragmentation is not addressed, is impeded performance.
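To make the excess concrete, here is a toy model of a disk as a simple block map. It is only a sketch of the concept—real file systems batch, cache and read ahead far more cleverly—but it shows why a fragmented file turns one read into many.

# Toy model: how many separate read requests must be issued for a contiguous
# file versus a fragmented one. One request can cover each run of consecutive
# block numbers; every gap forces the disk to reposition and start a new request.

def count_read_requests(block_list):
    requests = 1
    for prev, cur in zip(block_list, block_list[1:]):
        if cur != prev + 1:     # gap in the block numbers -> another request
            requests += 1
    return requests

contiguous_file = list(range(1000, 2000))                 # 1,000 blocks in one run
fragmented_file = [b * 7 % 100_000 for b in range(1000)]  # 1,000 blocks scattered apart

print("contiguous:", count_read_requests(contiguous_file), "request(s)")
print("fragmented:", count_read_requests(fragmented_file), "requests")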

In the past, most enterprises used defragmentation technology to handle the problem. This technology re-assembles fragmented files so that fewer I/Os are required to access them. Some defrag solutions also consolidate free space so that files can be written in far fewer pieces. Some of these solutions are run manually, some are scheduled, and a few are actually automatic.
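In miniature, the re-assembly step looks something like the following sketch, which packs each file back into a single contiguous run on an in-memory block map. Real defragmenters work through file-system APIs and must move data safely while it is in use; this only illustrates the idea.

# Highly simplified defragmentation pass: give every file one contiguous extent
# and leave the remaining free space consolidated at the end of the toy disk.

def defragment(files, disk_size):
    new_layout = {}
    next_free = 0
    for name, blocks in files.items():
        length = len(blocks)
        new_layout[name] = list(range(next_free, next_free + length))
        next_free += length              # files packed back-to-back from block 0
    assert next_free <= disk_size, "not enough room on the toy disk"
    return new_layout

before = {"a.log": [5, 91, 12, 40], "b.dat": [7, 8, 77]}
print(defragment(before, disk_size=100))
# {'a.log': [0, 1, 2, 3], 'b.dat': [4, 5, 6]}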

Today, however, there is performance software that goes beyond defrag—that actually prevents a majority of these kinds of performance problems before they ever occur. It operates fully automatically—which means that once it’s installed, site-wide slow performance is a thing of the past. Helpdesk calls drop off dramatically, processes complete far quicker, and employee productivity even improves. Best of all, a prime source of slow performance is totally eliminated.

Despite the complexity of such systems, it’s a simple problem with a simple solution. It should be addressed first, before spending hours tracing the myriad other issues that can occur as a result.

Wednesday, August 10, 2011

Barriers to Virtual Machine Performance

Virtual machines (VMs) have created a revolution in computing. The ability to launch a brand-new server with a few keystrokes, utilize it, and then discontinue or change that utilization, is a facility that will only grow with time. The future direction for virtual machines is probably a scenario in which a majority of computing is actually performed on VMs, with minimal hardware only present for hosting purposes.

The technologies underlying virtualization are quite remarkable. They add up to resources being coordinated and shared in such a way that work gets done across multiple platforms and environments almost as if no barriers existed at all. However, there are several issues that, if not properly addressed, can severely impact virtual machine performance.

First is addressing the issue of I/O reads and writes. If reads and writes are being conducted in the presence of file fragmentation, I/O bandwidth will quickly bottleneck. Fragmentation is the age-old problem of files being split into tens or hundreds of thousands of pieces (fragments) for better utilization of hard drive space.

In a virtual environment, fragmentation has a substantial impact, if only because of the multiple layers a single I/O request must pass through. If each I/O is performed for a single file fragment, performance is critically slowed, and this condition can even lead to an inability to run more VMs.
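A rough model makes the point. The per-layer overheads and device times below are assumptions chosen for illustration only; what matters is the shape of the result, with the cost scaling with fragment count at every layer.

# Rough model of I/O amplification in a virtual environment: every fragment
# becomes its own request, and every request pays a small toll at each layer
# it crosses. The figures are assumed, not measured.

layer_overhead_ms = {
    "guest file system": 0.05,
    "virtual disk layer": 0.10,
    "host storage stack": 0.05,
}
device_time_ms = 8.0        # assumed seek + rotational time per request

cost_per_request_ms = device_time_ms + sum(layer_overhead_ms.values())

for fragments in (1, 100, 10_000):
    total_s = fragments * cost_per_request_ms / 1000
    print(f"{fragments:>6} fragment(s): {total_s:8.2f} s to read the file")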

In dealing with fragmentation, there is also the need for coordination of shared I/O resources across the platform. A simple defragmentation solution will cut across the production needs of VMs, simply because it does not effectively prioritize I/Os.

There is also the matter of virtual disk “bloat”: wasted disk space that accumulates when virtual disks are set to grow dynamically but don’t then shrink when users or applications remove data.
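A quick way to see how much space bloat is costing is to compare the size of the dynamically expanding disk file on the host with the space actually in use inside the guest. The sketch below does exactly that; the path and the guest-reported figure are placeholders to substitute for your own environment.

# Estimate how much space a dynamically expanding virtual disk is wasting.
# Both inputs are placeholders: point the path at a real disk file and take
# the "used" figure from inside the guest.

import os

VIRTUAL_DISK_PATH = "/vmstore/guest01/guest01.vhd"   # hypothetical path
GUEST_USED_GB = 40.0                                  # as reported inside the guest

host_file_gb = os.path.getsize(VIRTUAL_DISK_PATH) / (1024 ** 3)
bloat_gb = max(host_file_gb - GUEST_USED_GB, 0)

print(f"virtual disk file on host: {host_file_gb:6.1f} GB")
print(f"in use inside the guest:   {GUEST_USED_GB:6.1f} GB")
print(f"approximate reclaimable:   {bloat_gb:6.1f} GB")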

Although these barriers are multiple, there is a single answer to them: virtual platform disk optimization technology. The first barrier, fragmentation, is dealt with by preventing a majority of it before it even occurs. Files existing in as few fragments as possible means I/O reads and writes are occurring at maximum speed. Resources are also coordinated across the platform so that VM production needs are fully taken into account.

Such software also contains a compaction feature so that wasted disk space can be easily eliminated.

These barriers can frustrate the management of virtual environments. Fortunately, IT personnel can solve them with a single virtual machine optimization solution.

Wednesday, July 27, 2011

How Good is the “Free” or “Included” Product?

There have always been all kinds of “free” or “included” items, meant to sweeten the sale of a main product. For example, you buy a new home in a subdivision, and the kitchen appliances—oven, stovetop, microwave, dishwasher—are already built in. It might also include, “free,” a washer and a dryer. Or, you buy a new computer, and it has a built-in camera. On top of that, it may even include free video editing software.

The upside is that these items were free, or included in the overall price. The downside, however, is that you’re now stuck with trying to make them work for the functions you intend. Are those kitchen appliances going to be what you really need for cooking—or would it have been better if you’d been able to pick your own, after you’d thoroughly checked them out? Or, how good is that camera included with your computer going to be? Could you do a professional shoot with it? And, is it possible to perform a competent editing job with the free video software?

Chances are, these items are nowhere near what you actually require in terms of functionality and features.

The same holds true for a “free” or “included” defragmenter. Fragmentation—the splitting of files into pieces, or fragments, for better utilization of disk space—is the primary drain on computer system performance and reliability. If it is possible to obtain a fragmentation solution for free, and that solution does the job, it’s certainly a winning situation.

The problem, however, is that because of the many innovations in today’s computing environments—such as thin provisioning, replication, snapshots, Continuous Data Protection (CDP) and deduplication, to name but a few—it takes more than a defragmenter, free or otherwise, to do the job. An optimization solution, which addresses a broader scope of issues than fragmentation only, is required.

Another issue is that even as a defragmenter, a free product has severe limitations and cannot address the enormous file sizes, voluminous drive capacities and high rates of fragmentation inherent in today’s systems.

A robust optimization solution addresses several aspects of file read and write I/Os in addition to fragmentation—a majority of which is prevented before it even happens. It includes  intelligent ordering of files for faster access, and other advanced technologies designed to automatically maximize system performance and reliability. Most importantly, it is truly up to the job of dealing with these important issues in today’s systems.

So carefully check out “free” or “included” items. Upon testing and inspection of features and functionality, you’ll find that you’d be better off paying the relatively inexpensive up-front cost, to save untold waste in time and money down the road.

Wednesday, July 20, 2011

Keeping Virtual Machines at High Speed

Every day, there is more innovation involving the use of virtual machines. For example, development is underway to virtualize user PCs when they are not in use, so that the physical machines can be shut down and power can be saved. In another example, virtual machines have given a considerable boost to cloud computing, and new cloud platforms and cloud system management options are constantly appearing on the horizon. Overall, it is clear that virtual machine technology has blown the door to our future wide open.

From a user standpoint, virtual technology will only become simpler. A few keystrokes and a new virtual machine is launched, complete with an operating system and the applications required for the tasks at hand. As with all computing technology, however, beneath that simple interface there is much occurring—various resources are being shared and coordinated so that work occurs smoothly across multiple platforms and environments.

The simplicity and speed which virtualization provides, however, can be heavily impacted if several key issues are not addressed.

The basic level of I/O reads and writes is one which can determine the speed of the entire environment. Fragmentation, originally developed to make better use of hard drive space, causes files to be split into thousands or tens of thousands of pieces (fragments). Because many extra I/Os are then required for reading and writing, performance can slow to a crawl, and I/O bandwidth will quickly bottleneck.

Due to the multiple layers that an I/O request must pass through within a virtual environment, fragmentation has even more of an impact than it does on a hardware-only platform. It can even lead to an inability to launch and run more virtual machines.

In virtual environments, fragmentation cannot be dealt with utilizing a simple defragmentation solution. This is because such a solution does not effectively prioritize I/Os, and shared I/O resources are therefore not coordinated.
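How such coordination can work is easier to see in miniature: background optimization work is dispatched only when production I/O has gone quiet, so VM workloads always win the contest for the disk. The activity numbers and threshold below are invented purely for the sketch.

# Sketch of I/O prioritization: do background optimization work only when
# foreground (production) I/O activity is below a threshold. The measurement
# here is a random stand-in; a real solution would sample actual disk activity.

import random
import time

IDLE_THRESHOLD_IOPS = 50

def foreground_iops_last_second():
    return random.randint(0, 400)       # stand-in for a real measurement

def run_background_pass(work_items):
    for item in work_items:
        while foreground_iops_last_second() > IDLE_THRESHOLD_IOPS:
            time.sleep(0.01)            # back off; production I/O comes first
        print(f"optimizing {item} while the disk is quiet")

run_background_pass(["guest01.vhd", "guest02.vhd", "guest03.vhd"])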

A condition also exists within virtual environments that can be referred to as virtual disk “bloat.” This is wasted disk space that occurs when virtual disks are set to dynamically grow but don’t then shrink when users or applications remove data.

All of these issues, fortunately, are answered by a single solution: virtual platform disk optimization technology. Fragmentation itself is dealt with by preventing a majority of it before it even occurs. When files exist in a non-fragmented state, far fewer read and write I/Os are needed to handle them, and speed is vastly improved. Resources are coordinated across the platform so that virtual machine production needs are fully taken into account. Wasted disk space is easily eliminated with a compaction feature.

These basic problems can keep virtual technology from providing the simplicity, speed and considerable savings in resources that it should. They can now all be handled with a single virtual machine optimization solution.

Wednesday, July 13, 2011

Fine-Tuning Computer System Efficiency

Mechanical efficiency—beginning as a way to save effort, time and expenditures—has today become a fine art. An excellent example is the aircraft. In 1903, after years of research and experimentation with wing types, controls, propellers and many other elements, Wilbur and Orville Wright managed to get a 605-pound plane to fly for 12 seconds roughly 10 feet off the ground. Today, a little over 100 years later, aircraft efficiency has become so advanced that we take an enormous object weighing hundreds of tons, bring it up to a cruising speed of over 500 miles per hour, and fly it at altitudes of 35,000 feet.

The advances that have made this possible span aerodynamics, fuel efficiency, utilization of space, weight distribution, and many more areas. And of course it doesn’t stop there; NASA has recently announced new experimental aircraft designs that move more people and cargo, yet use far less fuel and are even more aerodynamically efficient.

Similar remarkable innovations have been made in the field of computer efficiency. The first general-purpose electronic computer, ENIAC, unveiled in 1946, weighed more than 27 tons, occupied 1,800 square feet, and contained 17,468 vacuum tubes. Incredible as it was at the time, today’s computers, occupying a tiny fraction of the space and consuming an infinitesimal portion of the power, complete many times more work.

Yes, we have come a long way. Today, we store enormous multi-gigabyte files on media that can be held in a palm yet has capacity in the terabyte range. We can even run powerful servers that are virtual, take up no physical space at all, and consume only the power of their hosts.

An aspect of computer efficiency that has not been completely conquered, however, is the use of I/O resources. Improper use of these has ramifications that extend all across an enterprise and affect processing speed, drive space, and overall performance.

File fragmentation—the splitting of files into pieces (fragments) in order to better utilize drive space—is a fundamental cause of I/O read and write inefficiency. When files are split into thousands or tens of thousands of fragments, each of the fragments must be obtained by the file system whenever a file is read. Because free space is also fragmented, file writes are also drastically impacted. Overall, havoc is wreaked upon performance and resources.
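To see the write-side effect concretely, the sketch below uses an invented free-space map to show why a file written into fragmented free space necessarily lands in many pieces, while the same file written into consolidated free space needs only one extent.

# Why fragmented free space hurts writes: a new file must be carved into as
# many pieces as it takes to fit the available holes. Both free-space maps
# below are invented for illustration.

def allocate(file_blocks, free_runs):
    """Greedy first-fit allocation; returns the extents the new file occupies."""
    extents = []
    remaining = file_blocks
    for start, length in free_runs:
        if remaining == 0:
            break
        take = min(length, remaining)
        extents.append((start, take))
        remaining -= take
    if remaining:
        raise RuntimeError("toy disk is full")
    return extents

consolidated_free = [(0, 10_000)]                       # one large free run
fragmented_free = [(i * 13, 4) for i in range(3_000)]   # thousands of 4-block holes

print("consolidated free space:", len(allocate(2_000, consolidated_free)), "extent(s)")
print("fragmented free space:  ", len(allocate(2_000, fragmented_free)), "extents")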

Defragmentation—for a long time the sole method of addressing fragmentation—is no longer adequate to the problem. Today’s complex technical innovations require that efficiency take the form of optimization technology, which both maximizes performance and eliminates wasted disk I/O activity. With this technology, the majority of fragmentation is actually prevented before it occurs, while file optimization and other innovations round out the whole solution.

Optimization of I/O reads and writes is the final step in making today’s computing environments completely efficient.

Wednesday, July 6, 2011

As The Virtual World Turns...Optimize It

Virtual machine technology has expanded rapidly since being introduced a few short years ago. Virtual servers are now launched to perform many different types of tasks, and have also taken on an important role in storage. Virtual machines are proliferating into the desktop environment as well—and it appears that PCs will soon be replaced by ultra-thin clients (aka zero clients) that simply act as interfaces for virtual machines.

It appears that our not-so-distant future will be ensconced completely in the cloud—and nearly all of our computing actions will be virtual. Technologies continue to evolve to make this possible; the one thing that users, IT staff and corporate executives will not sacrifice is speed of access to data and rapidity of processing. Hence, anything which gets in the way of such performance must be firmly addressed.

As an ever-increasing amount of our computing becomes virtual, the speed of interaction between hardware hosts and virtual machines becomes more critical. Coordination of virtual machines also becomes vital—especially as the quantity of these increases.

Speed of access is dependent upon a basic computer operation: I/O reads and writes. In fact, that level is so important it can actually have a considerable impact on the entire environment. Many additional I/Os can be required for reads and writes when files are in a state of fragmentation. Originally developed for better utilization of hard drive space, fragmentation causes files to be split into tens or hundreds of thousands of pieces (fragments). Because of the additional I/Os required to read and write fragmented files, performance is seriously slowed down and I/O bandwidth bottlenecks occur frequently.

Within a virtual environment, an I/O request must pass through multiple layers. Because of this, fragmentation has even more of a profound impact in a virtual environment than it does in a strictly hardware platform. Left alone, it can even lead to an inability to launch and run more virtual machines.

Due to the complexity of virtual environments, a simple defragmentation solution won’t properly address the situation. In addition to fragmentation itself, I/Os must be prioritized so that shared I/O  resources can be properly coordinated. Fragmentation in virtual environments also causes virtual disk “bloat”, in which virtual disks are set to dynamically grow but don’t then shrink when users or applications remove data.

State of the art virtual platform disk optimization technology addresses all of these issues. A majority of fragmentation is actually prevented before it occurs. Virtual machine resources are fully coordinated, and wasted virtual disk space is eliminated with a compaction feature.

As our computing world continues to become increasingly virtual and move into the cloud, keep that world turning with competent optimization.

Wednesday, June 22, 2011

Yes, SAN Does Suffer from Fragmentation

SAN brings many benefits to an enterprise. Stored data does not reside directly on a company’s servers, so business applications keep their server power and end users get back the network capacity that would otherwise be used for storage. Administration is more flexible, because there is no need to shift storage cables and devices in order to move storage from one server to another. Servers can even be booted from the SAN itself, greatly shortening the time required to commission a new server.

There are numerous technologies employed to make SAN efficient, including RAID, I/O caching, snapshots and volume cloning, which have led some to believe that SANs do not suffer the effects of file fragmentation. Fragmentation is the splitting of files into pieces (fragments), originally developed for the purpose of better utilizing disk space in direct attached storage devices.

The problem is that data is read and written by the operating system, and this is done on a logical, not a physical, level. The OS’s file system, by its very nature, fragments files. While the data may appear efficiently arranged from the viewpoint of the SAN, from the viewpoint of the file system it is severely fragmented—and will be treated as such.
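This logical-level fragmentation is easy to observe directly. On a Linux host with e2fsprogs installed, the filefrag utility reports how many extents a file occupies regardless of how the underlying SAN has laid out the blocks; Windows exposes comparable information through its defragmentation APIs. The paths below are examples only, and depending on the kernel interface used, the command may need elevated privileges.

# Count how many extents (fragments) each file occupies at the file-system
# level, using the standard Linux filefrag tool. Example paths only.

import subprocess

def extent_count(path):
    # filefrag prints a summary such as: "/var/data/big.db: 1842 extents found"
    out = subprocess.run(["filefrag", path], capture_output=True, text=True, check=True)
    return int(out.stdout.rsplit(":", 1)[1].split()[0])

for f in ["/var/data/big.db", "/var/log/syslog"]:
    print(f, "->", extent_count(f), "extents")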

Fragmentation affects computer operations in numerous ways. Chief among them is performance: because files must be written and read in thousands or even hundreds of thousands of fragments, performance is severely slowed. In a fragmented environment, unexpected system hangs and even disk crashes are common. A heavy toll is taken on hardware, and disks can lose 50 percent or more of their expected lifespans due to all the extra work.

In the past, the solution to the fragmentation issue was a defragmenter. Because of the many innovations in today’s computing environments—such as those used with SAN—a higher-level solution is now needed. An optimization solution, which addresses a broader scope of issues than fragmentation alone, is required.

Such a solution approaches numerous aspects of file read and write I/Os in addition to fragmentation. The majority of fragmentation itself is prevented before it even occurs, but also included is the intelligent ordering of files for faster access, along with other advanced technologies designed to automatically maximize system performance and reliability.

The best proof of fragmentation’s effects on SAN is to test an optimization solution within an enterprise. Doing so makes it clear that fragmentation does indeed affect SAN operations—and that they can only benefit from its elimination.

Wednesday, June 15, 2011

Free Software: Not for the Big Time

There are many free utilities and applications out there, and if what you are doing is meant to be insignificant or small, they’re probably adequate. For example, there are a few free music recording apps available that will allow you to record multiple instruments and vocals to your computer, and then mix the results into a song. But if the resulting track is to be used for professional purposes, you’ll find it sorely lacking; you need Logic Pro, Pro Tools or the like to even come close to competing in the major markets.

There are numerous free accounting programs—probably fine for keeping track of bake sale income or the like. But used in a company or corporation to track income, accounts payable, expenditures, and profit and loss? Hardly. The same could be said for databases; a free one available for download won’t hold a candle to Oracle or SQL Server when it comes to business use, and no IT professional would even consider it.

On the utilities side, there are free defragmenters. Unlike the examples given above, these may not even be worth it for the home user, simply due to the nature and quantity of today’s fragmentation. But that argument aside, it is an obvious truth, upon examination, that these freebies are definitely not meant for business or corporate use.

Fragmentation is the splitting of files into pieces (fragments) on a hard drive, to better utilize disk space. A defragmenter is meant to solve this problem by “re-assembling” these files into a whole, or nearly whole, state. In the corporate environment, fragmentation levels are far beyond the capability of a free defragmenter to accomplish this task.

A free defragmentation utility must also be scheduled, and anyone who has actually had to try this has discovered that scheduling is practically impossible in today’s enterprises, simply because systems are constantly up and running.

But the primary problem with a free defragmenter is that, today, it takes more than defragmentation to truly tackle the resource loss associated with I/O reads and writes. Multi-faceted optimization is, by far, the best approach.

Technology is now available that, instead of defragmenting, actually prevents a majority of fragmentation before it ever occurs. This same technology also orders files for faster access, and performs a number of other vital actions that greatly increase performance and maximize reliability. All of these functions occur completely automatically, with no scheduling or other operator intervention required.

Free software is definitely not meant for the big time. This is doubly true in addressing fragmentation.

Wednesday, June 8, 2011

Safeguarding Performance of Virtual Systems

A company implementing virtual machine technology can expect to reap great rewards. Where installing a new server once meant a new physical machine (at the least a rack mount)—along with the power to run it and the space to house it—a server can now be fully deployed and run on an existing hardware platform. It will have everything the physical server would have had, including its own instance of an operating system, applications and tools, but no footprint and a tiny fraction of the once-required power.

In addition to the footprint savings, virtual machines also bring speed to the table. A virtual machine can be deployed and up and running in minutes instead of hours. Users can even deploy their own machines—something unheard of in the past. The result is a great time savings for users and IT personnel alike.

Virtual technology is now being used for many purposes. For example, it brings a great boost to Storage Area Network (SAN) technology which, in itself, takes an enormous amount of stress off of a system by moving storage traffic off the main production network.

Without proper optimization, however, virtual technology cannot bring the full benefits on which an enterprise depends. A major reason is that virtual technology—along with SAN and other recent innovations—relies, in the end, on the physical hard drive. The drive itself suffers from file fragmentation, the state of files and free space being scattered in pieces (fragments) all over the drive. Fragmentation causes severe I/O bottlenecks in virtual systems, because it builds up at an accelerated rate across multiple platforms, in each guest as well as on the host.

Virtualization suffers from other issues that are also the result of not being optimized. Virtual machine competition for shared I/O resources is not effectively prioritized across the platform, and virtual disks set to grow dynamically do not shrink when data is deleted; instead, free space is wasted.

It is vital that any company implementing virtual technology—and any technology in which it is put to use, such as SAN—employ an underlying solution for optimizing virtual machines. Such a solution optimizes the entire virtual platform, operating invisibly with zero system resource conflicts, so that most fragmentation is prevented from occurring at all. The overall effect is that unnecessary I/Os passed from the OS to the disk subsystem are minimized, and data is aligned on the drives for previously unattainable levels of speed and reliability.

Additionally, a tool is provided so that space is recovered on virtual disks that have been set to grow dynamically.

Such a virtualization optimization solution should be the foundation of any virtual machine scheme for all enterprises.