
Thursday, August 25, 2011

Keeping Virtualization Costs Down

Virtualization itself is a technology that considerably lowers IT operating costs. Right at the start, multiple servers can be launched and operated without the need for additional hardware. Then come energy savings, the ease and speed of use for users and administrators, and many other economic benefits. What, then, could actually cause virtualization operating costs to rise?

Virtual machines depend on numerous innovations to operate. A group of VMs all utilize a common hardware platform, to which data is saved and from which it is read. Hence, if there are any issues with I/O operations, every virtual machine hosted on that hardware will be affected.

Issues with I/O read and write operations are among the top barriers to computer system performance, physical or virtual. But because an I/O request must pass through multiple layers in a virtual environment, such issues can have an even more profound impact on VMs.

In addition to the general slow performance caused by I/O bottlenecks, such as sluggish VMs, slowed or stalled backups and other major problems, I/O troubles are also responsible for issues that might not so readily be associated with them. For example, excessive I/O activity can decrease hardware life by 50 percent or more. Since that hardware is the host for the VMs, attention to hardware life is crucial.

Also particular to virtual environments is the symptom of slow virtual migration. The task of migrating servers from physical to virtual (known as P2V) or from one type of virtual machine to another is a basic operation in virtual environments. The slowing down of this process can be cumbersome, especially if users or processes are waiting for the new virtual machine. As with the other issues listed above, slow virtual migration can be traced directly to issues with I/O operations.

Because of the many innovations inherent in a virtual environment, a comprehensive virtual platform disk optimizer is required as the solution. Such a solution drastically and automatically reduces the number of I/Os required to read and write files. It also coordinates shared I/O resources and addresses virtual disk “bloat”: wasted space left behind when dynamically growing virtual disks never shrink, and for which there is no other solution.

Issues with I/O operations raise operating costs across the board in a virtual environment. A virtual platform disk optimizer is the key to keeping them under control.

Wednesday, August 10, 2011

Barriers to Virtual Machine Performance

Virtual machines (VMs) have created a revolution in computing. The ability to launch a brand-new server with a few keystrokes, utilize it, and then discontinue or change that utilization, is a facility that will only grow with time. The future direction for virtual machines is probably a scenario in which a majority of computing is actually performed on VMs, with minimal hardware only present for hosting purposes.

The technologies underlying virtualization are quite remarkable. They add up to resources being coordinated and shared in such a way that work gets done across multiple platforms and environments almost as if no barriers existed at all. However, there are several issues that, if not properly addressed, can severely impact virtual machine performance.

First is addressing the issue of I/O reads and writes. If reads and writes are being conducted in the presence of file fragmentation, I/O bandwidth will quickly bottleneck. Fragmentation is the age-old problem of files being split into tens or hundreds of thousands of pieces (fragments) for better utilization of hard drive space.

In a virtual environment, fragmentation has a substantial impact, if only because of the multiple layers that a single I/O request must pass through. If a separate I/O must be performed for each file fragment, performance is critically slowed, and this condition can even lead to an inability to run more VMs.
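
As a rough illustration only (the seek and transfer figures below are assumptions, not measurements), a short Python sketch shows why fragment count matters: if every fragment costs its own I/O and seek, a file in thousands of pieces takes dramatically longer to read than the same file stored contiguously.

# Back-of-envelope sketch (illustrative numbers, not measurements):
# estimate how fragment count inflates the I/O cost of reading one file.

SEEK_LATENCY_MS = 8.0      # assumed average seek + rotational latency per I/O
TRANSFER_MB_PER_S = 120.0  # assumed sequential transfer rate of the disk

def read_time_ms(file_size_mb: float, fragments: int) -> float:
    """Rough read time: one seek per fragment plus the sequential transfer."""
    transfer_ms = (file_size_mb / TRANSFER_MB_PER_S) * 1000.0
    return fragments * SEEK_LATENCY_MS + transfer_ms

if __name__ == "__main__":
    size_mb = 500.0
    for fragments in (1, 100, 10_000):
        print(f"{fragments:>6} fragment(s): ~{read_time_ms(size_mb, fragments):,.0f} ms")

With these assumed numbers, a 500 MB file goes from roughly four seconds to well over a minute once it is split into ten thousand fragments.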

In dealing with fragmentation, there is also the need to coordinate shared I/O resources across the platform. A simple defragmentation utility will cut across the production needs of VMs, simply because it does not effectively prioritize I/Os.

There is also the situation of virtual disk “bloat”: wasted disk space that accumulates when virtual disks are set to dynamically grow but do not then shrink when users or applications remove data.
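
To make the idea concrete, here is a small, hypothetical Python sketch (the disk path and the guest’s used-space figure are placeholders) that estimates bloat by comparing how large a dynamically expanding virtual disk file has grown on the host with the space actually in use inside the guest.

# Hypothetical sketch: estimate "bloat" in a dynamically expanding virtual disk
# by comparing the disk file's size on the host with the space the guest
# actually reports as used. The path and the guest figure are placeholders.

import os

VDISK_PATH = r"D:\VMs\app-server\app-server.vhdx"  # hypothetical virtual disk file
GUEST_USED_GB = 40.0                               # as reported inside the guest OS

def estimate_bloat_gb(vdisk_path: str, guest_used_gb: float) -> float:
    host_size_gb = os.path.getsize(vdisk_path) / (1024 ** 3)
    return max(host_size_gb - guest_used_gb, 0.0)

if __name__ == "__main__":
    bloat = estimate_bloat_gb(VDISK_PATH, GUEST_USED_GB)
    print(f"Roughly {bloat:.1f} GB of the virtual disk may be reclaimable by compaction.")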

Although these barriers are multiple, there is a single answer to them: virtual platform disk optimization technology. The first barrier, fragmentation, is dealt with by preventing a majority of it before it even occurs. Files existing in as few fragments as possible means I/O reads and writes are occurring at maximum speed. Resources are also coordinated across the platform so that VM production needs are fully taken into account.

Such software also contains a compaction feature so that wasted disk space can be easily eliminated.

These barriers can frustrate the management of virtual environments. Fortunately, IT personnel can solve them with a single virtual machine optimization solution.

Wednesday, June 8, 2011

Safeguarding Performance of Virtual Systems

A company implementing virtual machine technology can expect to reap great rewards. Where installing a new server once meant a new physical machine (at the least a rack mount), along with the power to run it and the space to house it, a server can now be fully deployed and run on an existing hardware platform. It will have everything the physical server would have had, including its own instance of an operating system, applications and tools, but no footprint and a tiny fraction of the once-required power.

In addition to the footprint savings, virtual machines also bring speed to the table. A virtual machine can be deployed and up and running in minutes instead of hours. Virtualization even allows users to deploy their own machines, something unheard of in the past. The result is great time savings for users and IT personnel alike.

Virtual technology is now being used for many purposes. For example, it brings a great boost to Storage Area Network (SAN) technology which, in itself, takes an enormous amount of stress off of a system by moving storage traffic off the main production network.

Without proper optimization, however, virtual technology cannot bring the full benefits on which an enterprise depends. A major reason is that virtual technology—along with SAN and other recent innovations—relies, in the end, on the physical hard drive. The drive itself suffers from file fragmentation, which is the state of files and free space being scattered in pieces (fragments) all over the drive. Fragmentation causes severe I/O bottlenecks in virtual systems, due to accelerated fragmentation across multiple platforms. 

Virtualization suffers from other issues that are also the result of not being optimized. Virtual machine competition for shared I/O resources is not effectively prioritized across the platform, and virtual disks set to dynamically grow do not resize when data is deleted; instead, free space is wasted. 

It is vital that any company implementing virtual technology, and any technology in which it is put to use, such as SAN, employ an underlying solution for optimizing virtual machines. Such a solution optimizes the entire virtual platform, operating invisibly with zero system resource conflicts, so that most fragmentation is prevented from occurring at all. The overall effect is that unnecessary I/Os passed from the OS to the disk subsystem are minimized, and data is aligned on the drives for previously unattainable levels of speed and reliability.

Additionally, a tool is provided so that space is recovered on virtual disks that have been set to grow dynamically.
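
As a hedged illustration of what such a compaction step can look like in practice, the sketch below wraps one hypervisor’s command-line compaction (VirtualBox’s VBoxManage) from Python. The disk path is hypothetical, the VM should be powered off, and free space inside the guest generally needs to be zeroed out first for the operation to reclaim much space.

# Hedged example: compacting a dynamically allocated VirtualBox disk image.
# Shown only as one hypervisor's equivalent of a compaction step; the path is
# hypothetical, the VM should be powered off, and free space inside the guest
# usually needs to be zeroed first for best results.

import subprocess

VDISK = "/vm/storage/web-server.vdi"  # hypothetical disk image path

def compact_vdi(path: str) -> None:
    """Ask VirtualBox to release unused blocks from a dynamic disk image."""
    subprocess.run(["VBoxManage", "modifymedium", "disk", path, "--compact"],
                   check=True)

if __name__ == "__main__":
    compact_vdi(VDISK)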

Such a virtualization optimization solution should be the foundation of any virtual machine scheme for all enterprises.

Wednesday, March 30, 2011

The Liabilities of Free Software

Anything obtained for free can certainly seem like a benefit. And sometimes it is, like the rare time when you stumble upon someone giving away a certain item that turns out to be worth much more than originally suspected, or to be much more useful than at first glance. In the case of software, however, free isn’t necessarily good, for a number of reasons.

There are different varieties of free software. The variety that is actually free, with no strings, may or may not suit your needs. Many times it doesn’t, and when actually examined, the reasons for this are pretty obvious. Prime among them is the fact that you can’t hire great software engineers for free, nor obtain the necessary development and testing hardware. This takes money. Hence, free products built by an individual or developed through an open-source scenario don’t have the robust engineering that has gone into paid-for software.

Another prime reason such software may not work well is that the developers involved, while well-intentioned, are not necessarily experts in the area the software is designed to address. Companies that specialize in, say, anti-virus, have the budget to pour into researching the most effective ways to combat computer viruses using the least amount of computer resources. They are also able to stay constantly abreast of the latest viruses and update their users. Or, developers that are expert in and focus on defragmentation have found ways to keep systems free of system-crippling file fragmentation, and will consistently be on top of operating system changes and anything else that affects the efficiency of their product.

The above reason can also apply to software companies attempting to be “one size fits all,” designing and selling software in areas where they are not necessarily expert, simply in an effort to retain customers who have purchased their other products.

Another variety of free software is the free trial. Trials can be more helpful, especially if all features are available; they are not always, though, so it is advisable to check. In any case, they will almost always carry time limitations, and eventually you are better off purchasing the full version of the product.

In finding the right software, the best advice that can be given is this: free or not, check the functionality. Make sure it has the features you actually need. Seek out, through personal contact or forums, users who have experience with the product. If you can find something that genuinely does the job for free, great. But most of the time you will find that purchasing a full version of a product from a company expert in the area you are aiming to address will be your safest and, in the long run, most economical choice.

Thursday, February 10, 2011

"Yes, We Do Need Fragmentation Prevention!"


BURBANK, CA--Once upon a time there was a company just like yours. Despite tough times, they managed to keep business going and their bottom line improving, and between the money they had saved and their good credit they managed to obtain a brand new computer system. It had lightning-fast servers, cool new desktop systems for the employees, a high-bandwidth network and the latest in peripherals. Additionally, there was a Storage Area Network (SAN) for data security and fast retrieval, and virtual machine technology to help make the very most out of resources.

The company executives made one mistake, however: in the course of the purchase, despite the IT staff clamoring for it, they nixed the purchase of a robust third-party fragmentation solution, insisting that the built-in defrag solution would suffice.

Initially, performance of the system seemed fine and hummed right along. It wasn’t long, however, before the helpdesk phone started ringing with complaints of slow performance. At first they were isolated, but then they increased. Each and every one of those complaints had to be chased down, and they all basically led nowhere except to fragmentation. The IT personnel made notes to “schedule the defragmenter,” but given their overly busy schedules the time was never available for it.

Not far down the road, additional complaints started coming in of “unexplained” process hangs. Again, the IT staff knew what was behind the problem, but they did what they could to optimize the processes where possible, and made yet another note to “run the defragmenter” in free time that would never materialize.

The symptoms that the IT personnel knew stemmed from fragmentation only worsened. There were crashes where there shouldn’t have been. Hardware was failing before its time. Virtual machines were performing poorly, and the SAN was not delivering the speed that had been promised for it.

Finally, one morning, the IT director came in to find that last night’s backup had failed. At this point, she had had enough. She went to the executives that had stopped the purchase of a robust fragmentation solution, and explained to them that a modern company such as theirs had to have a fragmentation solution that would keep up with all the latest in technology that they were running. It had to be fully automatic, so that scheduling didn’t have to be done. It had to constantly address the fragmentation problem so that it was never an issue.

The IT director then explained that there was even a solution that prevented a majority of fragmentation from ever occurring, and that its price would be a fraction of the costs they were already incurring from IT staff constantly tackling fragmentation-related issues.

Finally, the executives understood. "Yes," they said. "We do need fragmentation prevention!"

And the company had maximum performance and reliability ever after.

Wednesday, February 2, 2011

System Reliability is Always Crucial

Both hardware and software makers work hard to ensure reliability, with technologies such as distributed computing and failover, and applications written to work as compatibly as possible with hardware, operating systems and other applications. A primary block to reliability, however, which can sometimes go unseen by IT personnel and users alike, is disk file fragmentation. Fragmentation can cause common reliability issues such as system hangs and even crashes. When crucial files are in tens, hundreds or even sometimes thousands of fragments, retrieving a single file becomes a considerable tax on already-strained system resources. As many IT personnel know too well, too much strain will definitely make reliability questionable.

The escalating sizes of today’s disks and files have caused file fragmentation to occur at much higher levels than on past systems. Couple that with the severely increased demand on servers due to the Web, crucial CRM applications and modern databases, and without a solution one has a recipe for disaster. On today’s servers, a fragmentation solution is mandatory.

Beyond simply having such a solution, however, attention should also be paid to the type of defragmentation solution, especially in regard to site volume and requirements. For most sites, manual defragmentation, the case-by-case launching of a defragmenter when desired or needed, is no longer an option, due to both fragmentation levels and the time required for a defragmenter to run. For many years, defragmenters have been available with scheduling options that allow specific times for defragmentation to be set, so that defragmentation occurs frequently enough to keep fragmentation under control, and at times when the impact on system resources is not an issue.
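
For illustration, a minimal Python sketch of that scheduled approach might look like the following. It leans on Windows’ built-in defrag command-line tool, assumes administrator rights, and uses a made-up volume letter and maintenance window.

# Hedged sketch of the scheduled approach described above: analyze a volume
# with Windows' built-in defrag tool and only run a full pass during an
# off-hours window. Requires administrator rights; the volume letter and the
# window are assumptions for illustration.

import subprocess
from datetime import datetime

VOLUME = "C:"
OFF_HOURS = range(1, 5)  # assumed maintenance window: 01:00 - 04:59

def analyze(volume: str) -> None:
    """Print a fragmentation analysis report for the volume."""
    subprocess.run(["defrag", volume, "/A"], check=True)

def defragment_if_off_hours(volume: str) -> None:
    """Run a full defragmentation pass only inside the maintenance window."""
    if datetime.now().hour in OFF_HOURS:
        subprocess.run(["defrag", volume], check=True)
    else:
        print("Outside the maintenance window; skipping the full pass.")

if __name__ == "__main__":
    analyze(VOLUME)
    defragment_if_off_hours(VOLUME)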

In the last few years, however, even scheduled defragmentation has started to become out of date. In addition to the extra IT time it takes to set schedules and ensure they are optimal, scheduled defragmentation is proving inadequate to keep up with the exponential increase in fragmentation rates. Hence, fragmentation technology that works constantly in the background, with no impact on system resources, has begun to appear.

Reliability of today’s systems, especially servers, is obviously vital. Tools such as the Windows Reliability and Performance Monitor will always be used to check and maintain system reliability. But as a standard action in maintaining system reliability, fragmentation should always be addressed.
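
As a small companion to tools like the Reliability and Performance Monitor, the hedged Python sketch below samples disk I/O counters with the third-party psutil library. The threshold is purely illustrative, but a sustained spike in reads and writes is often the first visible sign of the kind of I/O strain that fragmentation aggravates.

# Hedged monitoring sketch: sample disk I/O counters over a short interval
# using the third-party psutil library. The threshold is illustrative only.

import time
import psutil

def sample_disk_io(interval_s: float = 5.0):
    """Return reads/sec and writes/sec over a short sampling interval."""
    before = psutil.disk_io_counters()
    time.sleep(interval_s)
    after = psutil.disk_io_counters()
    reads = (after.read_count - before.read_count) / interval_s
    writes = (after.write_count - before.write_count) / interval_s
    return reads, writes

if __name__ == "__main__":
    reads, writes = sample_disk_io()
    print(f"Disk activity: {reads:.0f} reads/s, {writes:.0f} writes/s")
    if reads + writes > 1000:  # illustrative threshold, not a recommendation
        print("Sustained heavy I/O; worth checking fragmentation levels.")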