There have always been all kinds of “free” or “included” items meant to sweeten the sale of a main product. For example, you buy a new home in a subdivision, and the kitchen appliances—oven, stovetop, microwave, dishwasher—are already built in. It might also include a “free” washer and dryer. Or, you buy a new computer, and it has a built-in camera. On top of that, it may even include free video editing software.
The upside is that these items are free, or included in the overall price. The downside, however, is that you’re now stuck trying to make them work for the functions you intend. Are those kitchen appliances going to be what you really need for cooking—or would it have been better to pick your own, after you’d thoroughly checked them out? How good is that camera included with your computer going to be? Could you do a professional shoot with it? And is it possible to perform a competent editing job with the free video software?
Chances are, these items are nowhere near what you actually require in terms of functionality and features.
The same holds true for a “free” or “included” defragmenter. Fragmentation—the splitting of files into pieces, or fragments, for better utilization of disk space—is the primary drain on computer system performance and reliability. If it is possible to obtain a fragmentation solution for free, and that solution does the job, it’s certainly a winning situation.
The problem, however, is that because of the many innovations in today’s computing environments—such as thin provisioning, replication, snapshots, Continuous Data Protection (CDP) and deduplication, to name but a few—it takes more than a defragmenter, free or otherwise, to do the job. An optimization solution, which addresses a broader scope of issues than fragmentation only, is required.
Another issue is that even as a defragmenter, a free product has severe limitations and cannot address the enormous file sizes, voluminous drive capacities and high rates of fragmentation inherent in today’s systems.
A robust optimization solution addresses several aspects of file read and write I/Os in addition to fragmentation—a majority of which is prevented before it even happens. It includes intelligent ordering of files for faster access, and other advanced technologies designed to automatically maximize system performance and reliability. Most importantly, it is truly up to the job of dealing with these important issues in today’s systems.
So carefully check out “free” or “included” items. Upon testing and inspection of features and functionality, you’ll find that you’d be better off paying the relatively inexpensive up-front cost, to save untold waste in time and money down the road.
Wednesday, July 20, 2011
Keeping Virtual Machines at High Speed
Every day, there is more innovation involving the use of virtual machines. For example, development is underway to virtualize user PCs when they are not in use, so that the physical machines can be shut down and power can be saved. In another example, virtual machines have given a considerable boost to cloud computing, and new cloud platforms and cloud system management options are constantly appearing on the horizon. Overall, it is clear that virtual machine technology has blown the door to our future wide open.
From a user standpoint, virtual technology will only become simpler. A few keystrokes and a new virtual machine is launched, complete with an operating system and the applications required for its particular tasks. As with all computing technology, however, beneath that simple interface there is much occurring—various resources are being shared and coordinated so that work proceeds smoothly across multiple platforms and environments.
The simplicity and speed that virtualization provides, however, can be heavily impacted if several key issues are not addressed.
The basic level of I/O reads and writes is one factor that can determine the speed of the entire environment. Fragmentation, a by-product of the file system’s strategy for making better use of hard drive space, causes files to be split into thousands or tens of thousands of pieces (fragments). Because many extra I/Os are then required for reading and writing, performance can slow to a crawl, and I/O bandwidth quickly bottlenecks.
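To put rough numbers on that claim, the short Python sketch below models a file read as one disk seek per fragment plus a sequential transfer of the data. The seek time, transfer rate and fragment counts are assumed, round-number values for a conventional hard drive, not measurements from any particular system.

```python
# Rough, illustrative model of how fragmentation multiplies read time.
# All figures are assumed round numbers for a conventional hard drive.

AVG_SEEK_S = 0.008          # ~8 ms combined seek + rotational latency (assumed)
TRANSFER_MB_PER_S = 100.0   # sustained sequential transfer rate (assumed)

def read_time_seconds(file_mb: float, fragments: int) -> float:
    """Estimate time to read a file split into `fragments` pieces:
    one seek per fragment, plus the sequential transfer of the data."""
    return fragments * AVG_SEEK_S + file_mb / TRANSFER_MB_PER_S

for fragments in (1, 1_000, 10_000):
    t = read_time_seconds(100.0, fragments)
    print(f"100 MB file in {fragments:>6} fragments: ~{t:5.1f} s to read")
```

Under these assumptions, a contiguous 100 MB file reads in about a second, while the same file in 10,000 fragments takes over a minute, before any hypervisor overhead is added.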
Due to the multiple layers that an I/O request must pass through within a virtual environment, fragmentation has even more of an impact than it does on a purely physical platform. It can even lead to an inability to launch and run more virtual machines.
In virtual environments, fragmentation cannot be dealt with utilizing a simple defragmentation solution. This is because such a solution does not effectively prioritize I/Os, and shared I/O resources are therefore not coordinated.
A condition also exists within virtual environments that can be referred to as virtual disk “bloat.” This is wasted disk space that occurs when virtual disks are set to dynamically grow but don’t then shrink when users or applications remove data.
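As a sketch of how such bloat might be estimated, the snippet below simply compares the size of a dynamically expanding virtual disk file on the host with the space reported as in use inside the guest; the difference is a rough ceiling on what compaction could recover. The path and the guest-reported figure are hypothetical placeholders.

```python
# Illustrative only: estimate virtual disk "bloat" by comparing the size of
# the disk file on the host with the space actually in use inside the guest.
import os

VDISK_PATH = r"D:\VMs\web01\web01.vhd"   # hypothetical dynamically expanding disk
GUEST_USED_GB = 18.4                     # hypothetical figure reported from inside the guest

host_size_gb = os.path.getsize(VDISK_PATH) / 1024**3
bloat_gb = host_size_gb - GUEST_USED_GB

print(f"Virtual disk file on host : {host_size_gb:6.1f} GB")
print(f"Data in use inside guest  : {GUEST_USED_GB:6.1f} GB")
print(f"Reclaimable 'bloat'       : {bloat_gb:6.1f} GB (candidate for compaction)")
```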
All of these issues, fortunately, are answered by a single solution: virtual platform disk optimization technology. Fragmentation itself is dealt with by preventing a majority of it before it even occurs. When files exist in a non-fragmented state, far fewer read and write I/Os are needed to handle them, and speed is vastly improved. Virtual machine production needs are taken into account as shared resources are coordinated across the platform. Wasted disk space is easily eliminated with a compaction feature.
These basic problems can keep virtual technology from providing the simplicity, speed and considerable savings in resources that it should. They can now all be handled with a single virtual machine optimization solution.
Wednesday, June 22, 2011
Yes, SAN Does Suffer from Fragmentation
SAN brings many benefits to an enterprise. Because stored data does not reside directly on a company’s servers, business applications get back the server power, and end users the network capacity, that would otherwise be consumed by storage. Administration is more flexible, because there is no need to shift storage cables and devices in order to move storage from one server to another. Servers can even be booted from the SAN itself, greatly shortening the time required to commission a new server.
There are numerous technologies employed to make SAN efficient, including RAID, I/O caching, snapshots and volume cloning, which have led some to believe that SANs do not suffer the effects of file fragmentation. Fragmentation, the splitting of files into pieces (fragments), was originally a strategy for better utilizing disk space on direct-attached storage devices.
The problem is that data is read and written by the operating system, and this is done on a logical, not a physical, level. The OS’s file system, by its very nature, fragments files. While the data may appear efficiently arranged from the viewpoint of the SAN, from the viewpoint of the file system it is severely fragmented—and will be treated as such.
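A simplified model makes the point. Below, a file is represented the way the file system tracks it, as a list of (starting logical cluster, run length) extents; the extent values are invented for illustration. No matter how intelligently the SAN lays those clusters out physically, the operating system still issues at least one request per logical run it has recorded.

```python
# Conceptual model, not a real file system API: a file described by the
# (starting logical cluster, run length) extents the OS keeps for it.
extents = [(1_200, 16), (88_450, 8), (3_020, 64), (250_700, 4)]  # assumed example data

fragment_count = len(extents)
clusters = sum(length for _, length in extents)

print(f"File occupies {clusters} clusters in {fragment_count} logical runs;")
print(f"the file system will issue at least {fragment_count} separate read requests,")
print("regardless of how the SAN maps those clusters to physical disks.")
```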
Fragmentation affects computer operations in numerous ways. Chief among them is performance: because files must be written and read in thousands or even hundreds of thousands of fragments, operations are severely slowed. In a fragmented environment, unexpected system hangs and even disk crashes are common. A heavy toll is taken on hardware, and disks can lose 50 percent or more of their expected lifespans due to all the extra work.
In past times, the solution to the fragmentation issue was a defragmenter. Because of many innovations in today’s computing environments—such as those used with SAN—a higher-level solution is needed. An optimization solution, which addresses a broader scope of issues than fragmentation only, is required.
Such a solution approaches numerous aspects of file read and write I/Os in addition to fragmentation. The majority of fragmentation itself is prevented before it even occurs, but also included is the intelligent ordering of files for faster access, along with other advanced technologies designed to automatically maximize system performance and reliability.
The best proof of fragmentation’s effects on SAN is to test an optimization solution within the enterprise. It quickly becomes clear that fragmentation does indeed affect SAN operations—and that they can only benefit from its elimination.
Wednesday, June 15, 2011
Free Software: Not for the Big Time
There are many free utilities and applications out there, and if what you are doing is meant to be small or casual, they’re probably adequate. For example, there are a few free music recording apps available that will allow you to record multiple instruments and vocals to your computer, and then mix the results into a song. But if the resulting track is to be used for professional purposes, you’ll find it sorely lacking; you need Logic Pro, Pro Tools or the like to even come close to competing in the major markets.
There are numerous free accounting programs—probably fine for keeping track of bake sale income or the like. But used in a company or corporation to track income, accounts payable, expenditures, profit and loss? Hardly. The same could be said for databases; a free one available for download won’t hold a candle to Oracle or SQL Server when it comes to business use, and no IT professional would even consider it.
On the utilities side, there are free defragmenters. Unlike the examples given above, these may not even be worth it for the home user, simply due to the nature and quantity of today’s fragmentation. That argument aside, a close look makes it obvious that these freebies are definitely not meant for business or corporate use.
Fragmentation is the splitting of files into pieces (fragments) on a hard drive, to better utilize disk space. A defragmenter is meant to solve this problem by “re-assembling” those files into a whole, or nearly whole, state. In the corporate environment, however, fragmentation levels are far beyond what a free defragmenter can handle.
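For illustration only, here is a toy model of the core task a defragmenter faces: given a file scattered across several cluster runs, find a contiguous stretch of free clusters large enough to hold the whole file and relocate it there. Real defragmenters do this through the file system’s file-move facilities; the bitmap and extents here are invented example data.

```python
# Toy illustration of defragmentation at the logical level: find a contiguous
# run of free clusters big enough for the whole file and relocate it there.

def find_contiguous_free(bitmap, length):
    """Return the index of the first run of `length` free (False) clusters, or None."""
    run_start, run_len = None, 0
    for i, used in enumerate(bitmap):
        if not used:
            if run_start is None:
                run_start = i
            run_len += 1
            if run_len == length:
                return run_start
        else:
            run_start, run_len = None, 0
    return None

# Assumed example data: True = cluster in use, False = free.
bitmap = [True, False, True, True, False, False, False, False, False, True]
file_fragments = [(0, 1), (2, 2), (9, 1)]   # (start cluster, length) runs of one file
file_size = sum(length for _, length in file_fragments)

target = find_contiguous_free(bitmap, file_size)
print(f"Relocate {len(file_fragments)} fragments ({file_size} clusters) "
      f"to contiguous space starting at cluster {target}")
```

Even in this toy form the catch is visible: a large enough contiguous free region has to exist, and on a busy, heavily fragmented corporate volume it often does not.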
A free defragmentation utility must also be scheduled, and anyone who has actually had to try this discovers that scheduling is practically impossible in today’s enterprises, simply because systems are constantly up and running.
But the primary problem with a free defragmenter is that, today, it takes more than defragmentation to truly tackle the resource loss associated with I/O reads and writes. Multi-faceted optimization is, by far, the best approach.
Technology is now available that, instead of defragmenting, actually prevents a majority of fragmentation before it ever occurs. This same technology also orders files for faster access, and performs a number of other vital actions that greatly increase performance and maximize reliability. All of these functions occur completely automatically, with no scheduling or other operator intervention required.
Free software is definitely not meant for the big time. This is doubly true in addressing fragmentation.
Wednesday, June 8, 2011
Safeguarding Performance of Virtual Systems
A company implementing virtual machine technology can expect to reap great rewards. Where a new server once meant a new physical machine (at the least a rack-mount unit), along with the power to run it and the space to house it, a server can now be fully deployed and run on an existing hardware platform. It will have everything the physical server would have had, including its own instance of an operating system, applications and tools, but no footprint and a tiny fraction of the once-required power.
In addition to the footprint savings, virtual machines also bring speed to the table. A virtual machine can be deployed and up and running in minutes instead of hours, and users can even deploy their own machines, something unheard of in the past. That means a great time savings for users and IT personnel alike.
Virtual technology is now being used for many purposes. For example, it brings a great boost to Storage Area Network (SAN) technology which, in itself, takes an enormous amount of stress off of a system by moving storage traffic off the main production network.
Without proper optimization, however, virtual technology cannot deliver the full benefits on which an enterprise depends. A major reason is that virtual technology—along with SAN and other recent innovations—still relies, in the end, on the physical hard drive. The drive itself suffers from file fragmentation: the state of files and free space being scattered in pieces (fragments) all over the drive. Fragmentation causes severe I/O bottlenecks in virtual systems, because it accumulates at an accelerated rate across multiple platforms.
Virtualization suffers from other issues that are also the result of not being optimized. Virtual machine competition for shared I/O resources is not effectively prioritized across the platform, and virtual disks set to dynamically grow do not resize when data is deleted; instead, free space is wasted.
It is vital that any company implementing virtual technology—and any technology in which it is put to use, such as SAN—employ an underlying solution for optimizing virtual machines. Such a solution optimizes the entire virtual platform, operating invisibly with zero system resource conflicts, so that most fragmentation is prevented from occurring at all. The overall effect is that unnecessary I/Os passed from the OS to the disk subsystem are minimized, and data is aligned on the drives for previously unattainable levels of speed and reliability.
Additionally, a tool is provided so that space is recovered on virtual disks that have been set to grow dynamically.
Such a virtualization optimization solution should be the foundation of any enterprise’s virtual machine strategy.
Wednesday, May 25, 2011
Eliminating System Bottlenecks
Bottlenecks are no fun in any situation. Ask anyone who has to drive through rush hour traffic in the morning—a freeway bottleneck is where you sit in your car, unmoving, fretting over being late to work. If you get caught up in a security bottleneck in an airport, you’re dashing frantic looks at the clock, worried that you’re going to miss your flight. There are many more such examples, all equally infuriating.
Bottlenecks also occur in computer systems, and can be a major source of frustration for IT personnel. Technically, a bottleneck is a delay in transmission of data through the circuits of a computer's processor or over a network.
In processors, the delay typically occurs when the system’s bandwidth cannot supply data as fast as it can be processed. If the components of a system cannot all move the same amount of data at the same speed, a delay is created. For example, a processor able to consume several gigabytes of data per second will be severely bottlenecked by a memory subsystem that can deliver only a fraction of that.
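A back-of-the-envelope calculation shows the effect. The two figures below are assumed for illustration, not taken from any real specification.

```python
# Assumed, round-number figures: compare the rate at which a CPU could
# consume data with the rate at which memory can actually deliver it.
cpu_demand_gb_per_s = 6.4      # assumed: data the processor could consume per second
memory_supply_gb_per_s = 3.2   # assumed: peak bandwidth the memory subsystem provides

utilization = memory_supply_gb_per_s / cpu_demand_gb_per_s
print(f"The processor can be kept busy at most {utilization:.0%} of the time;")
print(f"the remaining {1 - utilization:.0%} is spent waiting on memory.")
```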
In the network situation, the flow of information transmitted across a network slows down. Network connections were originally designed to transmit little more than text files, and the proliferation of bandwidth-intensive transmissions such as high-resolution graphics has caused bottlenecks. These can also be caused by file system bottlenecks, as will be seen below.
One basic factor that can cause bottlenecks is often overlooked, and should always be addressed first: that of the reading and writing of files. If files are not properly organized on a drive, they are written and read in many pieces—which can jam up memory, processors and networks as well. It takes longer to read the file from the drive into the memory, then from the memory into the processor. The processor then must wait for all the data to be assembled, and is itself slowed down.
Networks become jammed because the many file segments must be transmitted individually.
Bottlenecks cause system slowdowns, downtime and increased energy consumption—and unless bottlenecks are addressed, such issues cannot be avoided. They also consume a tremendous number of IT hours in chasing them down, often futilely if the actual cause is not tackled.
The right choice for IT staff is to address the reading and writing of files right at the beginning, using performance technology designed to do so. There is even technology available today that prevents a majority of fragmentation before it ever occurs.
Before wasting time attempting to solve performance and network efficiency issues elsewhere, make sure file system bottlenecks are fully addressed first.
Wednesday, May 18, 2011
Balancing the Powerful Simplicity of SAN with Fragmentation Prevention
As with everything in computing, one of the primary goals developers have for storage area networks (SANs) is simplicity. SAN continues to advance with the advent of self-optimization, virtualization, storage provisioning and other benefits which make the technology more automatic. As SAN becomes more accessible and affordable, more companies are taking advantage of its many benefits.
The whole idea of a SAN, of course, is to move storage away from production servers, thereby freeing up resources for active company traffic. High-speed Fibre Channel networks make the switch to SAN invisible to the user, simply because access speeds are similar to those of direct-attached disks. Again, the goal is simplicity.
If file fragmentation is not addressed with an equally simple and powerful approach, this goal becomes less attainable. All of this advanced technology must still read files from hard drives. The splitting of files into thousands or tens of thousands of fragments introduces unnecessary complexity into the scheme, and access speed—another prime goal of a SAN—is undermined.
The various technologies being employed to make SANs faster and more automatic each have their own susceptibility to fragmentation. For example, thin provisioning presents a large virtual capacity up front while committing physical disk space only as it is needed. But the file system may simply write data wherever it finds free space. If data is written to a “high” logical cluster number (say, cluster 200), all clusters from zero to 200 may be allocated even if they are not used. As data is added to old files, and as files are created, expanded or deleted, the mismatch between file system allocation and storage-layer thin provisioning can contribute to fragmentation, over-allocation, and less efficient use of storage space.
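The arithmetic behind that example can be sketched as follows, assuming 4 KB clusters and a storage layer that commits every cluster up to the highest one written; the cluster counts are illustrative, not drawn from a real volume.

```python
# Illustrative arithmetic only (assumed 4 KB clusters): writing at a high
# logical cluster number can force far more space to be committed than the
# data actually needs.
CLUSTER_KB = 4                    # assumed cluster size
highest_written_cluster = 200     # the "cluster 200" example from the text
clusters_actually_used = 25       # assumed: clusters holding real data

allocated_kb = (highest_written_cluster + 1) * CLUSTER_KB
used_kb = clusters_actually_used * CLUSTER_KB

print(f"Committed by the storage layer: {allocated_kb} KB")
print(f"Actually holding data         : {used_kb} KB")
print(f"Stranded by allocation pattern: {allocated_kb - used_kb} KB")
```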
Another example is virtual machines. At the very least, file I/Os are passed through a host and a guest system; file fragmentation wreaks havoc by adding multiple I/Os where there are already plenty extra.
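A simple multiplication, with assumed figures, shows how the two effects compound: each fragment becomes a separate request, and each request is handled again at every layer it must cross between the guest and the physical storage.

```python
# Assumed figures: how fragmentation and virtualization layers compound.
fragments_per_file = 2_000   # assumed fragmentation level of one file
io_layers = 3                # e.g. guest file system, hypervisor storage stack, host/SAN

print(f"Contiguous file : {1 * io_layers} layer traversals per read")
print(f"Fragmented file : {fragments_per_file * io_layers} layer traversals per read")
```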
Technology such as SAN requires an equally powerful—and simple—fragmentation solution. Following the trend toward full automation, the most beneficial solution today is one that functions fully automatically, in the background, requiring no intervention from IT personnel. It is now even possible to prevent a majority of fragmentation before it occurs, putting its complex issues thoroughly in the past.
A SAN can now bring a robust, resource-saving storage solution to any site. As a companion, the fragmentation solution employed must be comparably robust—and simple.