Wednesday, February 23, 2011

Preventing Fragmentation in The Cloud

With corporate IT footprints shrinking and reliance on the Internet growing, cloud computing looks like the major trend of the future. Why spend money on costly space and computer systems if they can simply be hosted by a third-party company, well off the premises, and used and paid for only as needed? Another advantage is that they can be accessed from anywhere over the web, which fosters yet another movement of the future: virtual offices.

Many new technologies are part and parcel of cloud computing, including virtual machines, storage area networks (SANs), network-attached storage (NAS) and other leading innovations. There are now also applications and systems dedicated solely to cloud computing itself.

But if a company is going to hand a good portion (or all) of its computing needs over to a third-party cloud provider, the one demand it will have above all others is responsiveness. When an employee goes looking for a record, it needs to appear quickly. When billing needs to occur, it needs to happen fast so the company gets paid. As inventory is updated, the database needs to reflect changes right away and provide accurate, up-to-date information to anyone who needs it.

The one factor that drastically affects that performance—and can literally bring that cloud to the ground—is file fragmentation. Underneath all the technical advances that have resulted in cloud computing, data is still saved the way it always has been: on hard drives. File systems natively fragment files saved on hard drives, and that fragmentation results not only in slow performance, but in system hangs, shortened hard drive life and many other reliability problems as well.

A cloud must always be accessible—which means that there is never going to be time in which fragmentation can be addressed off-line. That means that any solution chosen will have to be one that consistently addresses fragmentation and keeps it completely at bay—fully automatically.

The best and most advanced technology available today actually prevents the majority of fragmentation from ever occurring. This means the cloud can always function at peak performance, and that clients who have entrusted their computing resources to the cloud provider will always have fast and reliable access to their data.

The cloud is rapidly becoming the new computing paradigm for businesses of all sizes. Keeping that cloud aloft means eliminating file fragmentation as an issue.

Wednesday, February 16, 2011

Fragmentation Prevention and Virtualization

From a long-term viewpoint, virtualization has always been a primary goal of computing. The extreme of this concept, of course, would be virtual environments such as those found on the holodeck of the fictional starship Enterprise, where entire worlds can be virtualized at will. Although R&D labs are secretly working to get us closer to such a scenario, we’re not there yet and probably have a ways to go. What we do have, though, is something almost as fascinating from a technical standpoint: virtual computers.

We have now reached the point where a user can generate and launch a virtual server within minutes. The server shares hardware resources with several others, but exists as its own whole and independent computing entity. Advances continue to be made; it has recently been announced that such virtual environments can be rapidly moved to a third-party cloud, replicating the exact local configuration and architecture. It’s almost (but not quite) a case of “hardware, what hardware?”

In fact, because a virtual server creates the illusion of existing independently of its host hardware, it can appear that it doesn’t suffer from the prime performance enemy of physical hard drives: file fragmentation. But even a quick examination of how a virtual server works shows that virtual servers do suffer from fragmentation, and that fragmentation can have an even more severe effect than it does on physical servers.

The data being utilized by a virtual machine is still being saved on a hard drive. A single drive or set of drives supports a number of virtual machines, called “guest systems,” and data from all of those machines is saved on the drives of what is called the “host system.”

A virtual machine issues its own I/O requests, which are relayed to the host system. Hence, multiple I/O requests occur for each file request: minimally, one request for the guest system, then another for the host system. When files are split into hundreds or thousands of fragments (not at all uncommon), multiple I/O requests are generated for each fragment of every file. That load is then multiplied by the number of virtual machines resident on the host server, and doing the math, as in the rough sketch below, it is easy to see that the result is seriously degraded performance.
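To put illustrative numbers on that multiplication, here is a minimal back-of-the-envelope sketch in Python. The fragment count, workload, VM count and two-hop request model are assumptions chosen for the example, not measurements from any particular system.

# Illustrative estimate of I/O request amplification on a virtualized host.
# All figures are assumptions for the sake of the example, not measurements.

FRAGMENTS_PER_FILE = 500     # a heavily fragmented file (hundreds of pieces)
REQUESTS_PER_FRAGMENT = 2    # minimally: one guest request plus one host request
FILES_READ_PER_MINUTE = 100  # assumed workload per virtual machine
VIRTUAL_MACHINES = 10        # guests sharing the same host drives

def io_requests_per_minute(fragments, hops, files, vms):
    """Total I/O requests the host must service per minute."""
    return fragments * hops * files * vms

contiguous = io_requests_per_minute(1, REQUESTS_PER_FRAGMENT,
                                    FILES_READ_PER_MINUTE, VIRTUAL_MACHINES)
fragmented = io_requests_per_minute(FRAGMENTS_PER_FILE, REQUESTS_PER_FRAGMENT,
                                    FILES_READ_PER_MINUTE, VIRTUAL_MACHINES)

print(f"Contiguous files: {contiguous:,} I/O requests per minute")
print(f"Fragmented files: {fragmented:,} I/O requests per minute")
print(f"Amplification:    {fragmented // contiguous}x")

With those assumed figures, the same workload that would generate 2,000 I/O requests per minute against contiguous files generates a million against heavily fragmented ones.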

It is obvious that keeping files in a defragmented state is the answer to virtualization performance. But just as virtualization technology continues to advance, so does performance technology. A solution now exists to automatically prevent a majority of fragmentation before it even occurs, making fragmentation nearly a thing of the past.

Virtual environments do suffer from fragmentation. All sites utilizing this fantastic technology should take care to make sure fragmentation is fully addressed, so that the full benefit of these incredible advances can be realized.

Thursday, February 10, 2011

"Yes, We Do Need Fragmentation Prevention!"


BURBANK, CA--Once upon a time there was a company just like yours. Despite tough times, they managed to keep business going and their bottom line improving, and between the money they had saved and their good credit, they managed to obtain a brand new computer system. It had lightning-fast servers, cool new desktop systems for the employees, a high-bandwidth network and the latest in peripherals. Additionally, there was a Storage Area Network (SAN) for data security and fast retrieval, and virtual machine technology to help make the very most out of resources.

The company executives made one mistake, however: in the course of the purchase, despite the IT staff clamoring for it, they nixed a robust third-party fragmentation solution, insisting that the built-in defrag utility would suffice.

Initially, performance of the system seemed fine and hummed right along. It wasn’t long, however, before the helpdesk phone started ringing with complaints of slow performance. At first they were isolated, but then they increased. Each and every one of those complaints had to be chased down, and they all led back to the same culprit: fragmentation. The IT personnel made notes to “schedule the defragmenter,” but given their overly busy schedules the time was never available for it.

Not far down the road, additional complaints started coming in about “unexplained” process hangs. Again, the IT staff knew what was behind the problem, but did what they could to optimize the processes when possible, and made yet another note to “run the defragmenter” during time that would never materialize.

The symptoms that the IT personnel knew stemmed from fragmentation only worsened. There were crashes where there shouldn’t have been. Hardware was failing before its time. Virtual machines were performing poorly, and the SAN was not delivering the speed that had been promised for it.

Finally, one morning, the IT director came in to find that last night’s backup had failed. At this point, she had had enough. She went to the executives who had stopped the purchase of a robust fragmentation solution, and explained to them that a modern company such as theirs had to have a fragmentation solution that could keep up with all the latest technology they were running. It had to be fully automatic, so that scheduling didn’t have to be done. It had to address the fragmentation problem constantly, so that it was never an issue.

The IT director then explained that there was even a solution that prevented a majority of fragmentation from ever occurring, and that its price would be a fraction of the costs they were already incurring from IT staff constantly tackling fragmentation-related issues.

Finally, the executives understood. "Yes," they said. "We do need fragmentation prevention!"

And the company had maximum performance and reliability ever after.

Wednesday, February 2, 2011

System Reliability is Always Crucial

Both hardware and software makers work hard to ensure reliability, with technologies such as distributed computing and failover, and with applications written to work as compatibly as possible with hardware, operating systems and other applications. A primary block to reliability, however, which can sometimes go unseen by IT personnel and users alike, is disk file fragmentation. Fragmentation can cause common reliability issues such as system hangs and even crashes. When crucial files are in tens, hundreds or sometimes even thousands of fragments, retrieving a single file becomes a considerable tax on already-strained system resources. As many IT personnel know too well, too much strain will definitely make reliability questionable.
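As a rough illustration of that tax, the sketch below estimates the extra positioning time a conventional hard drive spends retrieving one fragmented file. The seek and rotational latency figures are assumed ballpark values for a 7,200 RPM drive, not benchmarks.

# Rough estimate of extra disk time spent retrieving one fragmented file.
# Seek and latency figures are assumed ballpark values, not measurements.

AVG_SEEK_MS = 9.0         # assumed average seek time
AVG_ROTATIONAL_MS = 4.2   # assumed average rotational latency

def retrieval_overhead_ms(fragments):
    """Extra positioning time: each additional fragment costs roughly
    one seek plus one rotational latency before data transfer resumes."""
    extra_moves = max(fragments - 1, 0)
    return extra_moves * (AVG_SEEK_MS + AVG_ROTATIONAL_MS)

for fragments in (1, 10, 100, 1000):
    print(f"{fragments:>5} fragments -> ~{retrieval_overhead_ms(fragments):,.0f} ms extra positioning time")

Under those assumptions, a file in 1,000 fragments costs roughly thirteen extra seconds of head movement before a single read completes, and that time is multiplied across every user and process touching fragmented files.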

The escalating sizes of today’s disks and files have caused file fragmentation to occur at much higher levels than on past systems. Couple that with the severely increased demand on servers due to the Web, crucial CRM applications and modern databases, and without a solution one has a recipe for disaster. On today’s servers, a fragmentation solution is mandatory.

Beyond simply having such a solution, however, attention should be paid to the kind of defragmentation solution chosen, especially in regard to site volume and requirements. For most sites, manual defragmentation, the case-by-case launching of a defragmenter when desired or needed, is no longer an option due to fragmentation levels and the time required for a defragmenter to run. For many years, defragmenters have offered scheduling options that allow specific times to be set for defragmentation, so that it occurs frequently enough to keep fragmentation under control and at times when the impact on system resources isn’t an issue; a simple sketch of that approach follows.
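For illustration, here is a minimal Python sketch of the kind of script a scheduled task might run: it calls Windows’ built-in defrag.exe to analyze a volume and, when invoked with a flag, runs a full pass. The volume letter and the use of the /A (analyze) switch are assumptions for the example; the scheduling itself would be handled by Task Scheduler, and report formats vary between Windows versions.

# Minimal sketch: analyze a volume with the built-in Windows defrag tool
# and optionally run a defragmentation pass. Intended to be launched from
# a scheduled task during off-hours. Details are assumptions, not a
# recommendation of any particular product or configuration.
# (Running defrag typically requires administrator privileges.)

import subprocess
import sys

VOLUME = "C:"   # volume to maintain (assumed for the example)

def analyze(volume):
    """Run a read-only fragmentation analysis and return the report text."""
    result = subprocess.run(["defrag", volume, "/A"],
                            capture_output=True, text=True)
    return result.stdout

def defragment(volume):
    """Run a full defragmentation pass on the volume."""
    subprocess.run(["defrag", volume])

if __name__ == "__main__":
    print(analyze(VOLUME))          # log the analysis for the administrator
    if "--run" in sys.argv:
        defragment(VOLUME)          # only defragment when explicitly asked

Even this modest automation shows the limits described below: someone still has to pick the schedule, read the reports and make sure the maintenance window is long enough for the pass to finish.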

In the last few years, however, even scheduled defragmentation has started to become out of date. In addition to the extra IT time it takes to set schedules and ensure they are optimal, scheduled defragmentation is proving inadequate to keep up with the exponential increase in fragmentation rates. Hence, fragmentation technology that works constantly in the background, with no impact on system resources, has begun to appear.

The reliability of today’s systems, especially servers, is obviously vital. Tools such as the Windows Reliability and Performance Monitor will always have a place in checking and maintaining system reliability. But as a standard action in maintaining system reliability, fragmentation should always be addressed.