There have always been all kinds of “free” or “included” items meant to sweeten the sale of a main product. For example, you buy a new home in a subdivision, and the kitchen appliances—oven, stovetop, microwave, dishwasher—are already built in. It might also include, “free,” a washer and a dryer. Or, you buy a new computer, and it has a built-in camera. On top of that, it may even include free video editing software.
The upside is that these items were free, or included in the overall price. The downside, however, is that you’re now stuck with trying to make them work for the functions you intend. Are those kitchen appliances going to be what you really need for cooking—or would it have been better if you’d been able to pick your own, after you’d thoroughly checked them out? Or, how good is that camera included with your computer going to be? Could you do a professional shoot with it? And, is it possible to perform a competent editing job with the free video software?
Chances are, these items are nowhere near what you actually require in terms of functionality and features.
The same holds true for a “free” or “included” defragmenter. Fragmentation—the splitting of files into pieces, or fragments, for better utilization of disk space—is the primary drain on computer system performance and reliability. If it is possible to obtain a fragmentation solution for free, and that solution does the job, it’s certainly a winning situation.
The problem, however, is that because of the many innovations in today’s computing environments—such as thin provisioning, replication, snapshots, Continuous Data Protection (CDP) and deduplication, to name but a few—it takes more than a defragmenter, free or otherwise, to do the job. An optimization solution, which addresses a broader scope of issues than fragmentation only, is required.
Another issue is that even as a defragmenter, a free product has severe limitations and cannot address the enormous file sizes, voluminous drive capacities and high rates of fragmentation inherent in today’s systems.
A robust optimization solution addresses several aspects of file read and write I/O beyond fragmentation itself—and it prevents the majority of fragmentation before it ever happens. It includes intelligent ordering of files for faster access, along with other advanced technologies designed to automatically maximize system performance and reliability. Most importantly, it is truly up to the job of dealing with these issues in today’s systems.
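As a rough illustration of what “intelligent ordering of files” can mean, here is a minimal sketch in Python. It is purely hypothetical (not any vendor’s actual algorithm): it ranks files by how often they are accessed and lays the hottest ones out first, where access is fastest.

    from dataclasses import dataclass

    @dataclass
    class FileRecord:
        name: str
        size_mb: int
        accesses_per_day: int  # hypothetical usage data a real optimizer would gather

    def plan_layout(files):
        # Hottest files first; a layout engine would then place them
        # contiguously, starting in the fastest region of the disk.
        return sorted(files, key=lambda f: f.accesses_per_day, reverse=True)

    files = [
        FileRecord("archive.bak", 4096, 1),
        FileRecord("app.db", 512, 900),
        FileRecord("report.docx", 2, 40),
    ]
    offset_mb = 0
    for f in plan_layout(files):
        print(f"{f.name} placed at MB offset {offset_mb}")
        offset_mb += f.size_mb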
So carefully check out “free” or “included” items. Upon testing and inspection of features and functionality, you’ll find that you’d be better off paying the relatively inexpensive up-front cost, to save untold waste in time and money down the road.
Wednesday, July 20, 2011
Keeping Virtual Machines at High Speed
Every day, there is more innovation involving the use of virtual machines. For example, development is underway to virtualize user PCs when they are not in use, so that the physical machines can be shut down and power can be saved. In another example, virtual machines have given a considerable boost to cloud computing, and new cloud platforms and cloud system management options are constantly appearing on the horizon. Overall, it is clear that virtual machine technology has blown the door to our future wide open.
From a user standpoint, virtual technology will only become simpler. A few keystrokes, and a new virtual machine is launched, complete with an operating system and the applications required for the tasks at hand. As with all computing technology, however, much is occurring beneath that simple interface—various resources are being shared and coordinated so that work flows smoothly across multiple platforms and environments.
The simplicity and speed that virtualization provides, however, can be heavily compromised if several key issues are not addressed.
The basic level of I/O reads and writes can determine the speed of the entire environment. Fragmentation—a by-product of the file system’s strategy for making full use of hard drive space—causes files to be split into thousands or tens of thousands of pieces (fragments). Because many extra I/Os are then required for reading and writing, performance can slow to a crawl and I/O bandwidth quickly bottlenecks.
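To see why those extra I/Os matter, consider a back-of-the-envelope model in Python. The seek and throughput figures are assumptions typical of a 7200 RPM disk, not measurements, and each fragment is charged one seek:

    # Assumed figures for a typical 7200 RPM disk: ~12 ms per seek,
    # ~100 MB/s sequential throughput. Each fragment costs one extra seek.
    SEEK_MS = 12.0
    THROUGHPUT_MB_S = 100.0

    def read_time_s(file_mb, fragments):
        transfer_ms = file_mb / THROUGHPUT_MB_S * 1000
        return (fragments * SEEK_MS + transfer_ms) / 1000

    for frags in (1, 100, 10_000):
        print(f"1 GB file in {frags} fragments: ~{read_time_s(1024, frags):.0f} s to read")

Under these assumptions, a contiguous 1 GB file reads in about 10 seconds, while the same file in 10,000 fragments takes roughly two minutes; the transfer time is unchanged, but the seeking dominates.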
Due to the multiple layers that an I/O request must pass through within a virtual environment, fragmentation has even more of an impact than it does in hardware-platform-only circumstances. It can even lead to an inability to launch and run additional virtual machines.
In virtual environments, fragmentation cannot be dealt with by a simple defragmentation solution, because such a solution does not effectively prioritize I/Os—and shared I/O resources are therefore left uncoordinated.
Virtual environments also suffer from a condition that can be referred to as virtual disk “bloat”: wasted disk space that accumulates when virtual disks are set to grow dynamically but do not shrink when users or applications delete data.
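A rough way to gauge bloat from the host side is to compare the image file’s size with the space the guest still reports as in use. In this sketch the guest figure is an assumed input, and the file name is hypothetical:

    import os

    def bloat_report(image_path, guest_used_bytes):
        # Bloat = space held by the image file beyond what the guest still uses.
        image_bytes = os.path.getsize(image_path)
        reclaimable = max(image_bytes - guest_used_bytes, 0)
        print(f"image file size: {image_bytes / 2**30:6.1f} GiB")
        print(f"guest in use:    {guest_used_bytes / 2**30:6.1f} GiB")
        print(f"reclaimable:     {reclaimable / 2**30:6.1f} GiB (candidate for compaction)")

    # Hypothetical example: a dynamically grown 40 GiB image whose guest
    # now uses only 15 GiB after large deletions.
    # bloat_report("vm-disk.vdi", 15 * 2**30)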
All of these issues, fortunately, are answered by a single solution: virtual platform disk optimization technology. Fragmentation itself is dealt with by preventing a majority of it before it even occurs; when files exist in a non-fragmented state, far fewer read and write I/Os are needed to handle them, and speed is vastly improved. Virtual machine production needs are taken into account as shared resources are coordinated, and wasted disk space is easily eliminated with a compaction feature.
These basic problems can keep virtual technology from providing the simplicity, speed and considerable savings in resources that it should. They can now all be handled with a single virtual machine optimization solution.
Wednesday, July 13, 2011
Fine-Tuning Computer System Efficiency
Mechanical efficiency—beginning as a way to save effort, time and expenditures—has today become a fine art. An excellent example is the aircraft. In 1903, after years of research and experimentation with wing types, controls, propellers and many other elements, Wilbur and Orville Wright managed to get a 605-pound plane to fly for 12 seconds roughly 10 feet off the ground. Today, a little over 100 years later, aircraft efficiency has become so advanced that we take an enormous object weighing hundreds of tons, bring it up to a cruising speed of over 500 miles per hour, and fly it at an altitude of 30,000 feet or more.
The advances that have made this possible span aerodynamics, fuel efficiency, utilization of space, weight distribution and much more. And of course it doesn’t stop there; NASA has recently announced new experimental aircraft designs that move more people and cargo, yet use far less fuel and are even more aerodynamically efficient.
Similar remarkable innovations have been made in the field of computer efficiency. The first general-purpose electronic computer, ENIAC, put into operation in 1946, weighed more than 27 tons, occupied 1,800 square feet and contained 17,468 vacuum tubes. Incredible as it was at the time, today’s computers occupy a tiny fraction of the space, consume an infinitesimal portion of the power and complete many times more work.
Yes, we have come a long way. Today, we store enormous multi-gigabyte files on media that can be held in a palm yet has capacity in the terabyte range. We can even run powerful servers that are virtual, take up no physical space at all and consume only the power of their hosts.
An aspect of computer efficiency that has not been completely conquered, however, is the use of I/O resources. Improper use of these has ramifications that extend all across an enterprise and affect processing speed, drive space, and overall performance.
File fragmentation—the splitting of files into pieces (fragments) in order to better utilize drive space—is a fundamental cause of I/O read and write inefficiency. When files are split into thousands or tens of thousands of fragments, each of the fragments must be obtained by the file system whenever a file is read. Because free space is also fragmented, file writes are also drastically impacted. Overall, havoc is wreaked upon performance and resources.
Defragmentation—for a long time the sole method of addressing fragmentation—is no longer adequate to the problem. Today’s complex technical innovations require that efficiency take the form of optimization technology, which both maximizes performance and eliminates wasted disk I/O activity. With this technology, the majority of fragmentation is actually prevented before it occurs, while file optimization and other innovations round out the whole solution.
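One common prevention idea (a minimal sketch, not necessarily any product’s actual method) is to tell the file system a file’s full size before writing it, so a single contiguous run can be reserved instead of the file growing piecemeal. This Python sketch uses os.posix_fallocate, which is available on Linux and other POSIX systems:

    import os

    def write_preallocated(path, data):
        # Reserve the file's full extent up front, so the file system can
        # allocate one contiguous run instead of growing the file piecemeal.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        try:
            os.posix_fallocate(fd, 0, len(data))  # Linux/POSIX only
            os.write(fd, data)
        finally:
            os.close(fd)

    write_preallocated("demo.bin", b"x" * (8 * 2**20))  # one contiguous 8 MiB file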
Optimization of I/O reads and writes is the final step in making today’s computing environments truly efficient.
Wednesday, July 6, 2011
As The Virtual World Turns...Optimize It
Virtual machine technology has expanded rapidly since being introduced a few short years ago. Virtual servers are now launched to perform many different types of tasks, and have taken on an important role in storage. Virtual machines are also proliferating into the desktop environment—and it appears that PCs will soon be replaced by ultra-thin clients (aka zero clients) that simply act as interfaces for virtual machines.
It appears that our not-so-distant future will be ensconced completely in the cloud—and nearly all of our computing actions will be virtual. Technologies continue to evolve to make this possible; the one thing that users, IT staff and corporate executives will not sacrifice is speed of access to data and rapidity of processing. Hence, anything that gets in the way of such performance must be firmly addressed.
As an ever-increasing amount of our computing becomes virtual, the speed of interaction between hardware hosts and virtual machines becomes more critical. Coordination of virtual machines also becomes vital—especially as their numbers grow.
Speed of access depends upon a basic computer operation: I/O reads and writes. In fact, that level is so important it can have a considerable impact on the entire environment. Many additional I/Os can be required for reads and writes when files are in a state of fragmentation. A by-product of the drive for better utilization of hard drive space, fragmentation causes files to be split into thousands, even tens of thousands, of pieces (fragments). Because of the additional I/Os required to read and write fragmented files, performance slows seriously and I/O bandwidth bottlenecks occur frequently.
Within a virtual environment, an I/O request must pass through multiple layers. Because of this, fragmentation has an even more profound impact in a virtual environment than it does on a strictly hardware platform. Left alone, it can even lead to an inability to launch and run additional virtual machines.
Due to the complexity of virtual environments, a simple defragmentation solution won’t properly address the situation. In addition to fragmentation itself, I/Os must be prioritized so that shared I/O resources can be properly coordinated. Virtual environments also suffer from virtual disk “bloat,” in which virtual disks are set to grow dynamically but do not shrink when users or applications delete data.
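To make the prioritization point concrete, here is a toy model in Python (not how any actual hypervisor schedules I/O): requests from several virtual machines share one queue, and the dispatcher always services the most urgent request first.

    import heapq
    import itertools

    class IoScheduler:
        def __init__(self):
            self._queue = []
            self._order = itertools.count()  # FIFO tie-breaker within a priority

        def submit(self, priority, vm, op):
            # Lower number = more urgent.
            heapq.heappush(self._queue, (priority, next(self._order), vm, op))

        def dispatch(self):
            while self._queue:
                priority, _, vm, op = heapq.heappop(self._queue)
                print(f"servicing {vm}: {op} (priority {priority})")

    sched = IoScheduler()
    sched.submit(2, "vm-batch", "background maintenance write")
    sched.submit(0, "vm-db", "transaction log flush")
    sched.submit(1, "vm-web", "page read")
    sched.dispatch()  # vm-db first, then vm-web, then vm-batch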
State-of-the-art virtual platform disk optimization technology addresses all of these issues. A majority of fragmentation is prevented before it occurs, virtual machine resources are fully coordinated, and wasted virtual disk space is eliminated with a compaction feature.
As our computing world continues to become increasingly virtual and move into the cloud, keep that world turning with competent optimization.