Wednesday, May 25, 2011

Eliminating System Bottlenecks

Bottlenecks are no fun in any situation. Ask anyone who has to drive through rush hour traffic in the morning—a freeway bottleneck is where you sit in your car, unmoving, fretting over being late to work. If you get caught up in a security bottleneck in an airport, you’re dashing frantic looks at the clock, worried that you’re going to miss your flight. There are many more such examples, all equally infuriating.

Bottlenecks also occur in computer systems, and can be a major source of frustration for IT personnel. Technically, a bottleneck is a delay in transmission of data through the circuits of a computer's processor or over a network.

In processors, the delay typically occurs when the system's bandwidth cannot keep up with the amount of information being relayed at the speed it is being processed. If the components of a system cannot all move data at comparable rates, the slowest component delays the rest. For example, a processor running at 2GHz will be severely bottlenecked by memory on an 800MHz bus.
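The mismatch can be sketched with simple arithmetic. The sketch below compares the rate at which a processor could consume data with the rate at which memory can deliver it; all figures are hypothetical round numbers for illustration, not measurements of real hardware.

```python
# Illustrative only: compare the rate at which a CPU could consume data
# with the rate at which the memory subsystem can supply it.
# The numbers below are hypothetical, not real hardware specs.

def bottleneck_factor(cpu_demand_gb_s, memory_bw_gb_s):
    """How many times faster the CPU could run if memory kept up."""
    return cpu_demand_gb_s / memory_bw_gb_s

# Suppose a processor could usefully consume 16 GB/s of data,
# but memory can only deliver 4 GB/s.
factor = bottleneck_factor(16, 4)
print(f"CPU is starved by a factor of {factor:.0f}x")  # prints "4x"
```

However fast the processor is, it can only work at the rate of its slowest supplier of data.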

In the network situation, the flow of information transmitted across a network is slowed down. Network connections were originally designed to transmit only text files, and the proliferation of bandwidth-intensive transmissions such as high-resolution graphics has caused bottlenecks in the process. These can also be caused by file system bottlenecks, as will be seen below.

One basic factor that can cause bottlenecks is often overlooked, and should always be addressed first: the reading and writing of files. If files are not stored contiguously on a drive, they are written and read in many pieces, which can jam up memory, processors and networks as well. It takes longer to read the file from the drive into memory, and then from memory into the processor. The processor must then wait for all the data to be assembled, and is itself slowed down.

Networks become jammed because each of these many file segments must be transmitted individually.
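The read cost described above can be modeled very roughly: each fragment costs an additional seek before its data can be read sequentially. The timings below are hypothetical placeholders chosen only to show the shape of the effect, not benchmarks of any real drive.

```python
# Toy model of fragmented-file read cost: one seek per fragment,
# plus sequential read time for the data itself.
# SEEK_MS and READ_MS_PER_MB are illustrative, not measured values.

SEEK_MS = 8.0          # hypothetical average seek time per fragment
READ_MS_PER_MB = 2.0   # hypothetical sequential read time per MB

def read_time_ms(size_mb, fragments):
    """Total read time = one seek per fragment + sequential transfer."""
    return fragments * SEEK_MS + size_mb * READ_MS_PER_MB

contiguous = read_time_ms(100, 1)      # 100 MB in one piece: 208 ms
fragmented = read_time_ms(100, 1000)   # same file in 1000 pieces: 8200 ms
print(f"contiguous: {contiguous} ms, fragmented: {fragmented} ms")
```

The data is identical in both cases; only the number of pieces changes, and the seek overhead dominates as fragmentation grows.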

Bottlenecks cause system slowdowns, downtime and increased energy consumption, and such issues cannot be avoided until the bottlenecks themselves are addressed. Additionally, bottlenecks cause tremendous expenditure of IT hours in chasing them down, many times futilely if the actual cause is not tackled.

The right choice for IT staff is to address the reading and writing of files right at the beginning, using performance technology designed to do so. There is even technology available today that prevents a majority of these file system bottlenecks before they ever occur.

Before wasting time attempting to solve performance and network efficiency issues elsewhere, make sure file system bottlenecks have been completely addressed first.

Wednesday, May 18, 2011

Balancing the Powerful Simplicity of SAN with Fragmentation Prevention

As with everything in computing, one of the primary goals developers have for storage area networks (SANs) is simplicity. SAN continues to advance with the advent of self-optimization, virtualization, storage provisioning and other benefits which make the technology more automatic. As SAN becomes more accessible and affordable, more companies are taking advantage of its many benefits.

The whole idea of a SAN, of course, is moving storage away from production servers, thereby freeing up resources for active company traffic. High-speed Fibre Channel networks make it possible for the switch to SAN to be invisible to the user, simply because access speeds are similar to those of direct-attached disks. Again, the goal is simplicity.

If file fragmentation is not addressed with an equally simple and powerful approach, this goal becomes less attainable. All of this advanced technology must still read files from hard drives. When files are split into thousands or tens of thousands of fragments, unnecessary complexities are introduced into the scheme, and access speed, another prime goal of a SAN, is nullified.

The various technologies being employed to make SANs faster and more automatic each have their own unique susceptibility to fragmentation. For example, thin provisioning allocates disk space on demand rather than reserving it all in advance. But at the same time, the file system may simply write data wherever free space is to be found. If data is written to a “high” logical cluster number (say, cluster 200), all clusters from zero to 200 will be treated as allocated even if not used. When data is added to an old file, new files are added or deleted, or an old file is expanded, this difference between file system disk allocation and storage system thin provisioning can contribute to fragmentation, over-allocation, and less efficient use of storage space.
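The over-allocation effect can be sketched numerically. In the hypothetical model below, writing a single cluster at a high logical cluster number causes the storage layer to treat every cluster up to that point as in use; the 4KB cluster size and the allocation rule are assumptions for illustration, not the behavior of any specific file system or array.

```python
# Hypothetical sketch of thin-provisioning over-allocation: if the file
# system writes at a high logical cluster number, the storage layer may
# treat every cluster up to that point as allocated.
# Cluster size and allocation rule are illustrative assumptions.

CLUSTER_SIZE_KB = 4

def allocated_kb(highest_cluster_written):
    """Clusters 0..N count as allocated once cluster N is written."""
    return (highest_cluster_written + 1) * CLUSTER_SIZE_KB

def used_kb(clusters_actually_written):
    """Space actually holding data."""
    return clusters_actually_written * CLUSTER_SIZE_KB

# One 4 KB write lands at cluster 200:
print(allocated_kb(200), "KB allocated for", used_kb(1), "KB of data")
```

In this model a single 4KB write at cluster 200 ties up 804KB of provisioned space, which is the kind of waste the paragraph above describes.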

Another example is virtual machines. At the very least, file I/Os are passed through both a guest and a host system; file fragmentation wreaks havoc by adding multiple I/Os where there are already plenty extra.
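A rough model of that multiplication: each fragment costs at least one I/O, and each I/O traverses both the guest and the host storage stacks. The layer count and fragment counts below are illustrative assumptions, not measurements of any hypervisor.

```python
# Rough model of I/O amplification in a virtual machine: every fragment
# costs at least one I/O, and each I/O passes through multiple layers.
# LAYERS and the fragment counts are illustrative assumptions.

LAYERS = 2  # guest file system + host file system

def total_ios(fragments):
    """I/O operations seen across both storage stacks."""
    return fragments * LAYERS

print(total_ios(1), "I/Os for a contiguous file;",
      total_ios(5000), "I/Os for one in 5000 fragments")
```

Fragmentation and virtualization thus compound each other: every extra fragment is paid for twice.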

Technology such as SAN requires an equally powerful, and simple, fragmentation solution. Following the trend of full automation, the most beneficial solution today is one which functions fully automatically, in the background, requiring no intervention from IT personnel. It is now even possible to prevent a majority of fragmentation before it occurs, putting its complex issues thoroughly in the past.

A SAN can now bring a robust, resource-saving storage solution to any site. As a companion, the fragmentation solution employed must be comparably robust, and just as simple.