Virtualization itself is a technology that considerably lowers IT operating costs. Right from the start, multiple servers can be launched and operated without additional hardware. Then come energy savings, ease and speed of use for both users and administrators, and many other economic benefits. What, then, could actually cause virtualization operating costs to rise?
Virtual machines depend on numerous innovations to operate. A group of VMs all utilize a common hardware platform, to which data is saved and from which it is read. Hence, if there are any issues with I/O operations, every virtual machine hosted on that hardware will be affected.
Issues with I/O read and write operations are some of the top barriers to computer system performance, physical or virtual. But because an I/O request must pass through multiple layers in a virtual environment, such issues have an even more profound impact on VMs.
I/O bottlenecks cause general slow performance: sluggish VMs, slowed or stalled backups and other major problems. But I/O troubles are also responsible for issues that might not so readily be associated with them. For example, excessive I/O activity can shorten hardware life by 50 percent or more, and since that hardware hosts the VMs, attention to hardware life is crucial.
Also particular to virtual environments is the symptom of slow virtual migration. Migrating servers from physical to virtual (known as P2V), or from one type of virtual machine to another, is a basic operation in virtual environments. A slowdown in this process can be cumbersome, especially if users or processes are waiting on the new virtual machine. As with the other issues listed above, slow virtual migration can be traced directly to issues with I/O operations.
Because of the many innovations inherent in a virtual environment, a comprehensive virtual platform disk optimizer is required as the solution. With it, the number of I/Os required to read and write files is drastically and automatically reduced. It also coordinates shared I/O resources and addresses virtual disk “bloat,” a condition that arises from excessive I/Os and for which there is no other remedy.
Issues with I/O operations raise operating costs within a virtual environment across the board. A virtual platform disk optimizer is the key to keeping them under control.
Wednesday, August 10, 2011
Barriers to Virtual Machine Performance
Virtual machines (VMs) have created a revolution in computing. The ability to launch a brand-new server with a few keystrokes, utilize it, and then discontinue or change that utilization, is a facility that will only grow with time. The future direction for virtual machines is probably a scenario in which a majority of computing is actually performed on VMs, with minimal hardware only present for hosting purposes.
The technologies underlying virtualization are quite remarkable. They add up to resources being coordinated and shared in such a way that work gets done across multiple platforms and environments almost as if no barriers existed at all. However, there are several issues that, if not properly addressed, can severely impact virtual machine performance.
First is the issue of I/O reads and writes. If reads and writes are being conducted in the presence of file fragmentation, I/O bandwidth will quickly bottleneck. Fragmentation is the age-old condition in which the file system splits files into pieces (fragments), sometimes tens or hundreds of thousands of them, in order to make better use of hard drive space.
In a virtual environment, fragmentation has a substantial impact, if only because of the multiple layers a single I/O request must pass through. If a separate I/O is required for each file fragment, performance slows critically, and the condition can even prevent additional VMs from being run.
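To make that concrete, here is a rough, hypothetical sketch (in Python, with assumed figures) of how the number of read requests grows with the number of fragments. It illustrates the arithmetic only; it is not a measurement from any particular platform.

```python
# Hypothetical illustration: read requests needed for one file, assuming each
# contiguous extent (fragment) costs at least one separate read request.

def reads_required(file_size_mb, fragment_count, max_io_mb=1):
    """Estimate read requests for a file stored in `fragment_count` extents.

    Assumes the storage stack transfers at most `max_io_mb` per request from
    a contiguous region, so every fragment needs at least one request.
    """
    size_limited = -(-file_size_mb // max_io_mb)  # ceiling division
    return max(size_limited, fragment_count)

file_size_mb = 512  # assumed 512 MB file
print(reads_required(file_size_mb, fragment_count=1))        # contiguous: 512 requests
print(reads_required(file_size_mb, fragment_count=50_000))   # fragmented: 50,000 requests
```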
In dealing with fragmentation, there is also the need to coordinate shared I/O resources across the platform. A simple defragmentation utility will cut across the production needs of VMs, simply because it does not effectively prioritize I/Os.
There is also the situation of virtual disk “bloat”: wasted disk space that accumulates when virtual disks are set to grow dynamically but do not shrink again when users or applications remove data.
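As a minimal sketch of what that bloat looks like, the hypothetical Python snippet below compares the space a dynamically growing virtual disk file occupies on the host with the space the guest actually uses. The file path and figures are assumptions for illustration only.

```python
import os

# Hypothetical example: measure "bloat" in a dynamically growing virtual disk.
# The disk path and guest usage figure are assumptions, not real measurements.

def virtual_disk_bloat(vdisk_path, guest_used_bytes):
    """Return bytes the disk file occupies on the host beyond what the guest uses."""
    host_allocated = os.path.getsize(vdisk_path)  # size of the disk file on the host
    return max(0, host_allocated - guest_used_bytes)

# Example with assumed figures: a 60 GB disk file whose guest only uses 22 GB.
# bloat = virtual_disk_bloat("/vmstore/app01.vmdk", guest_used_bytes=22 * 2**30)
# print(f"Reclaimable space (approx.): {bloat / 2**30:.1f} GB")
```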
Although these barriers are multiple, there is a single answer to them: virtual platform disk optimization technology. The first barrier, fragmentation, is dealt with by preventing the majority of it before it ever occurs. When files exist in as few fragments as possible, I/O reads and writes occur at maximum speed. Resources are also coordinated across the platform so that VM production needs are fully taken into account.
Such software also contains a compaction feature so that wasted disk space can be easily eliminated.
These barriers can frustrate the management of virtual environments. Fortunately, IT personnel can solve them with a single virtual machine optimization solution.
Wednesday, July 20, 2011
Keeping Virtual Machines at High Speed
Every day, there is more innovation involving the use of virtual machines. For example, development is underway to virtualize user PCs when they are not in use, so that the physical machines can be shut down and power can be saved. In another example, virtual machines have given a considerable boost to cloud computing, and new cloud platforms and cloud system management options are constantly appearing on the horizon. Overall, it is clear that virtual machine technology has blown the door to our future wide open.
From a user standpoint, virtual technology will only become simpler. A few keystrokes and a new virtual machine is launched, complete with an operating system and the applications required for its particular tasks. As with all computing technology, however, beneath that simple interface there is much occurring—various resources are being shared and coordinated so that work smoothly occurs across multiple platforms and environments.
The simplicity and speed that virtualization provides, however, can be heavily impacted if several key issues are not addressed.
The basic level of I/O reads and writes can determine the speed of the entire environment. Fragmentation, a file system behavior originally intended to make better use of hard drive space, splits files into thousands or tens of thousands of pieces (fragments). Because many extra I/Os are then required to read and write them, performance can slow to a crawl, and I/O bandwidth quickly bottlenecks.
Due to the multiple layers that an I/O request must pass through within a virtual environment, fragmentation has even more of an impact than it does on a hardware-only platform. It can even lead to an inability to launch and run more virtual machines.
In virtual environments, fragmentation cannot be dealt with by using a simple defragmentation utility. Such a tool does not effectively prioritize I/Os, so shared I/O resources are not coordinated.
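To show what prioritizing I/Os can mean in practice, here is a hypothetical sketch (not any vendor's actual scheduler) in which background optimization work always yields to production requests. The queue design and request names are assumptions.

```python
import heapq

# Hypothetical sketch of priority-based I/O scheduling: production requests
# always dispatch ahead of background defragmentation/optimization requests.

PRODUCTION, BACKGROUND = 0, 1   # lower number = higher priority

class IoScheduler:
    def __init__(self):
        self._queue = []        # (priority, sequence, request)
        self._seq = 0

    def submit(self, request, priority):
        heapq.heappush(self._queue, (priority, self._seq, request))
        self._seq += 1

    def next_request(self):
        """Return the highest-priority pending request, or None if idle."""
        return heapq.heappop(self._queue)[2] if self._queue else None

sched = IoScheduler()
sched.submit("defrag: consolidate fragments of logfile.dat", BACKGROUND)
sched.submit("VM guest read: database.mdf", PRODUCTION)
print(sched.next_request())   # the production read is serviced first
```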
A condition also exists within virtual environments that can be referred to as virtual disk “bloat.” This is wasted disk space that occurs when virtual disks are set to dynamically grow but don’t then shrink when users or applications remove data.
All of these issues, fortunately, are answered by a single solution: virtual platform disk optimization technology. Fragmentation itself is dealt with by preventing the majority of it before it ever occurs. When files exist in a non-fragmented state, far fewer read and write I/Os are needed to handle them, and speed is vastly improved. Virtual machine production needs are fully taken into account as resources are coordinated across the platform. Wasted disk space is easily eliminated with a compaction feature.
These basic problems can keep virtual technology from providing the simplicity, speed and considerable savings in resources that it should. They can now all be handled with a single virtual machine optimization solution.
Wednesday, July 13, 2011
Fine-Tuning Computer System Efficiency
Mechanical efficiency—beginning as a way to save effort, time and expenditures—has today become a fine art. An excellent example is the aircraft. In 1903, after years of research and experimentation with wing types, controls, propellers and many other elements, Wilbur and Orville Wright managed to get a 605-pound plane to fly for 12 seconds roughly 10 feet off the ground. Today, a little over 100 years later, aircraft efficiency has become so advanced that we now take an enormous object weighing between 500 and 700 tons, bring it up to a speed of around 400 miles per hour, and cruise it at an altitude of 30,000 feet.
The advances that have made this possible have been made in aerodynamics, fuel efficiency, utilization of space, weight distribution, and many more. And of course it doesn’t stop there; NASA has recently announced new experimental aircraft designs that actually move more people and cargo, yet use far less fuel and are even more aerodynamically efficient.
Similar remarkable innovations have been made in the field of computer efficiency. The first general-purpose electronic computer, ENIAC, put into operation in 1947, weighed more than 27 tons, occupied 1,800 square feet, and contained 17,468 vacuum tubes. Incredible as it was at the time, it has long since been surpassed: today’s computers, occupying a tiny fraction of the space and consuming an infinitesimal portion of the power, complete many times more work.
Yes, we have come a long way. Today, we store enormous multi-gigabyte files on media that can be held in the palm of a hand, yet has capacity in the terabyte range. We can even run powerful servers that are virtual, take up no physical space at all, and consume only the power of their hosts.
An aspect of computer efficiency that has not been completely conquered, however, is the use of I/O resources. Improper use of these has ramifications that extend all across an enterprise and affect processing speed, drive space, and overall performance.
File fragmentation—the splitting of files into pieces (fragments) in order to better utilize drive space—is a fundamental cause of I/O read and write inefficiency. When files are split into thousands or tens of thousands of fragments, each of the fragments must be obtained by the file system whenever a file is read. Because free space is also fragmented, file writes are also drastically impacted. Overall, havoc is wreaked upon performance and resources.
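To illustrate the effect of fragmented free space on writes, here is a rough, hypothetical sketch of a greedy allocator placing one new file into free extents. The extent sizes are assumed, and real file systems use far more sophisticated allocation strategies.

```python
# Hypothetical sketch: writing a file into fragmented free space.
# Free extents are (start_cluster, length) pairs; all figures are assumed.

def allocate(file_clusters, free_extents):
    """Greedily place a file across free extents; return the fragments created."""
    fragments, remaining = [], file_clusters
    for start, length in free_extents:
        if remaining == 0:
            break
        used = min(length, remaining)
        fragments.append((start, used))
        remaining -= used
    if remaining:
        raise RuntimeError("not enough free space")
    return fragments

contiguous_space = [(0, 10_000)]                       # one large free extent
scattered_space = [(i * 50, 8) for i in range(1_000)]  # many small 8-cluster gaps

print(len(allocate(2_000, contiguous_space)))  # 1 fragment -> few write I/Os
print(len(allocate(2_000, scattered_space)))   # 250 fragments -> many write I/Os
```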
Defragmentation—for a long time the sole method of addressing fragmentation—is now no longer adequate to the problem. Today’s complex technical innovations require that efficiency take the form of optimization technology, which both maximizes performance and eliminates wasted disk I/O activity. With this technology the majority of fragmentation is now actually prevented before it occurs, while file optimization and other innovations add to and round out the whole solution.
Optimization of I/O reads and writes is the final step in making today’s computing environments completely efficient.
Wednesday, June 22, 2011
Yes, SAN Does Suffer from Fragmentation
SAN brings many benefits to an enterprise. Because stored data does not reside directly on a company’s servers, business applications keep the server power, and end users the network capacity, that storage would otherwise consume. Administration is more flexible, because there is no need to shift storage cables and devices in order to move storage from one server to another. Servers can even be booted from the SAN itself, greatly shortening the time required to commission a new server.
There are numerous technologies employed to make a SAN efficient, including RAID, I/O caching, snapshots and volume cloning, which have led some to believe that SANs do not suffer the effects of file fragmentation. Fragmentation is the splitting of files into pieces (fragments), a file system mechanism originally developed to make better use of disk space on direct-attached storage devices.
The problem is that data is read and written by the operating system, and this is done at a logical, not a physical, level. The OS’s file system, by its very nature, fragments files. While the data may appear efficiently arranged from the viewpoint of the SAN, from the viewpoint of the file system it is severely fragmented, and it will be treated as such.
Fragmentation affects computer operations in numerous ways. Chief among them is performance: because files must be written and read in thousands or even hundreds of thousands of fragments, performance is severely slowed. In a fragmented environment, unexpected system hangs and even disk crashes are common. A heavy toll is taken on hardware, and disks can lose 50 percent or more of their expected lifespan due to all the extra work.
In past times, the solution to the fragmentation issue was a defragmenter. Because of many innovations in today’s computing environments—such as those used with SAN—a higher-level solution is needed. An optimization solution, which addresses a broader scope of issues than fragmentation only, is required.
Such a solution addresses numerous aspects of file read and write I/Os in addition to fragmentation. The majority of fragmentation itself is prevented before it even occurs, but also included is the intelligent ordering of files for faster access, along with other advanced technologies designed to automatically maximize system performance and reliability.
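As one illustration of what “intelligent ordering of files” can mean, the hypothetical sketch below simply ranks files by assumed access frequency so the hottest files could be laid out first in the fastest, most contiguous region of the volume. The file names and statistics are invented for the example.

```python
# Hypothetical sketch of intelligent file ordering: place the most frequently
# accessed files first, in the fastest/most contiguous region of the volume.

# Assumed access statistics (file name -> accesses per day), illustration only.
access_counts = {
    "orders.db": 12_400,
    "archive_2009.zip": 3,
    "index.dat": 8_950,
    "setup.log": 41,
}

placement_order = sorted(access_counts, key=access_counts.get, reverse=True)
print(placement_order)
# ['orders.db', 'index.dat', 'setup.log', 'archive_2009.zip']
# A real optimizer would then lay these files out contiguously in this order.
```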
The best proof of fragmentation’s effects on a SAN comes from testing an optimization solution within an enterprise. Such a test clearly shows that fragmentation does indeed affect SAN operations, and that those operations can only benefit from its elimination.
Wednesday, May 25, 2011
Eliminating System Bottlenecks
Bottlenecks are no fun in any situation. Ask anyone who has to drive through rush hour traffic in the morning—a freeway bottleneck is where you sit in your car, unmoving, fretting over being late to work. If you get caught up in a security bottleneck in an airport, you’re dashing frantic looks at the clock, worried that you’re going to miss your flight. There are many more such examples, all equally infuriating.
Bottlenecks also occur in computer systems, and can be a major source of frustration for IT personnel. Technically, a bottleneck is a delay in transmission of data through the circuits of a computer's processor or over a network.
In processors, the delay typically occurs when the system’s bandwidth cannot support the amount of information being relayed at the speed it is being processed. If the components of a system cannot all feed data at the same rate, a delay is created. For example, a processor able to consume data at 2 GB per second will be severely bottlenecked by a memory subsystem that can deliver only 800 MB per second.
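A minimal sketch of that arithmetic, using the figures from the example above plus an assumed disk-read rate: end-to-end throughput is capped by the slowest stage.

```python
# Hypothetical sketch: end-to-end throughput is capped by the slowest component.
# Throughput figures (MB/s) are assumed values for illustration only.

stage_throughput_mb_s = {
    "disk reads": 150,
    "memory bus": 800,
    "processor": 2000,
}

effective = min(stage_throughput_mb_s.values())
bottleneck = min(stage_throughput_mb_s, key=stage_throughput_mb_s.get)
print(f"Effective throughput: {effective} MB/s (limited by {bottleneck})")
```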
In the network situation, the flow of information transmitted across a network is slowed down. Many network connections were originally designed to carry only lightweight, text-based traffic, and the proliferation of bandwidth-intensive transmissions such as high-resolution graphics has caused bottlenecks. Network bottlenecks can also be caused by file system bottlenecks, as will be seen below.
One basic factor that can cause bottlenecks is often overlooked, and should always be addressed first: the reading and writing of files. If files are not properly organized on a drive, they are written and read in many pieces, which can jam up memory, processors and networks as well. It takes longer to read a file from the drive into memory, and then from memory into the processor. The processor must then wait for all the data to be assembled, and is itself slowed down.
Networks become jammed because these many file segments are having to be transmitted individually.
Bottlenecks cause system slowdowns, downtime and increased energy consumption, and unless the bottlenecks themselves are addressed, those issues cannot be avoided. They also consume a tremendous number of IT hours in chasing them down, often futilely if the actual cause is never tackled.
The right choice for IT staff is to address the reading and writing of files right at the beginning, using performance technology designed to do so. There is even technology available today that prevents the majority of fragmentation before it ever occurs.
Before wasting time attempting to solve performance and network efficiency issues, make sure to address file system bottlenecks completely in the beginning.
Wednesday, May 18, 2011
Balancing the Powerful Simplicity of SAN with Fragmentation Prevention
As with everything in computing, one of the primary goals developers have for storage area networks (SANs) is simplicity. SAN continues to advance with the advent of self-optimization, virtualization, storage provisioning and other benefits which make the technology more automatic. As SAN becomes more accessible and affordable, more companies are taking advantage of its many benefits.
The whole idea of a SAN, of course, is to move storage away from production servers, thereby freeing up resources for active company traffic. High-speed Fibre Channel networks make the switch to a SAN invisible to the user, simply because access speeds are similar to those of direct-attached disks. Again, the goal is simplicity.
If file fragmentation is not addressed with an equally simple and powerful approach, this goal becomes less attainable. All of this advanced technology must still read files from hard drives. The splitting of files into thousands or tens of thousands of fragments introduces unnecessary complexity into the scheme, and access speed, another prime goal of a SAN, is undermined.
The various technologies employed to make SANs faster and more automatic each have their own susceptibility to fragmentation. For example, thin provisioning allows disk space to be allocated on demand rather than all up front. But the file system may simply write data wherever it finds logical free space. If data is written to a “high” logical cluster number (say, cluster 200), the storage may end up allocating everything from cluster zero through 200 even though much of it is unused. As data is added to old files, and as files are created, deleted or expanded, the mismatch between file system allocation and storage-level thin provisioning can contribute to fragmentation, over-allocation and less efficient use of storage space.
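As a toy model of that mismatch, the hypothetical sketch below assumes the thin-provisioned layer backs whole fixed-size extents with real space the moment any cluster inside them is written. The granularity and cluster layouts are invented for illustration.

```python
# Hypothetical toy model: a thin-provisioned volume that allocates space in
# fixed-size extents as soon as any logical cluster inside an extent is written.

EXTENT_CLUSTERS = 64   # assumed allocation granularity of the storage layer

def extents_allocated(written_clusters):
    """Count storage extents backed with real space for a set of written clusters."""
    return len({cluster // EXTENT_CLUSTERS for cluster in written_clusters})

compact_layout = range(0, 200)               # 200 clusters written contiguously
scattered_layout = range(0, 200 * 64, 64)    # same 200 clusters, widely scattered

print(extents_allocated(compact_layout))     # 4 extents of physical space
print(extents_allocated(scattered_layout))   # 200 extents for the same data
```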
Another example is virtual machines. At the very least, file I/Os must pass through both a guest and a host system; file fragmentation wreaks havoc by adding multiple I/Os where plenty of extra I/Os already exist.
Technology such as SAN requires an equally powerful, and equally simple, fragmentation solution. Following the trend toward full automation, the most beneficial solution today is one that functions fully automatically, in the background, requiring no intervention from IT personnel. It is now even possible to prevent the majority of fragmentation before it occurs, putting these complex issues thoroughly in the past.
A SAN can now bring a robust, resource-saving storage solution to any site. As a companion, the fragmentation solution employed must be comparably robust, and just as simple.
Wednesday, April 13, 2011
How Efficient Does Your System Have to Be?
The word “efficient” is defined as, “Performing or functioning in the best possible manner with the least waste of time and effort.” It could also be defined as, “The extent to which time or effort is well used for the intended task or purpose.”
The word certainly can be, and usually is, applied to a business process. The objective of any business process is to get the highest quality work done in the least amount of time, with a minimum of effort. Similarly, “efficiency” applies to mechanics, and has for hundreds of years: it means getting the most mechanical output for the least energy input. Efficiency in business processes, mechanics and energy also has a keen impact on economics.
A computer system is unique in that it directly impacts all of these elements: business processes, mechanics, energy and, therefore, economics. Hence a computer system must be as efficient as possible in every aspect of its operation.
Many innovations have contributed to the highest-ever efficiency we see in systems today. Components use the least amount of power to render maximum processing and storage. Form factors have become increasingly small so as not to over-utilize another aspect of efficiency: space. Probably the most interesting of these innovations is the virtual machine, which relies only on the power of its host and takes up no physical space at all.
One aspect of computer system efficiency that is still sometimes overlooked, however, is the use of I/O resources. Inefficient use of these impacts all other computer resources: drive space, hardware life, processing and backup speed, and—worst of all—performance.
A primary cause of I/O resource inefficiency is file fragmentation. A natural function of a file system, fragmentation means the splitting of files into pieces (fragments) in order to better utilize drive space. It is not uncommon for a file to be split into thousands or tens of thousands of fragments. It is the fact that each and every one of those fragments must be obtained whenever that file is accessed that wreaks such havoc on performance and resources.
For many years, defragmentation was the only method of addressing fragmentation. But because of today’s complex technical innovations, efficiency now comes in the form of optimization technology, which both maximizes performance and eliminates wasted disk I/O activity. The majority of fragmentation is now prevented, while file optimization and other innovations are combined to complete the solution.
This solution puts the final touch on—and completely maximizes—computer system efficiency.