

Three Models for Your Storage Architecture

In the January issue of Network Magazine, Rob Melloy, European manager at network storage company Auspex Systems, examines some of the different storage options available to network managers.

Drawing an analogy with earlier methods of storing and retrieving music -- such as LPs and 8-track tapes -- he notes that it hasn’t always been the lure of higher fidelity that has appealed to the music-buying public; more often, it has been a matter of economics. Following that line of logic, Melloy writes, data managers often buy a particular type of storage not because of superior technology, but because of cost-effectiveness and, to some extent, availability.

Why is this so? Melloy thinks that current technology gives storage vendors, systems integrators, and network managers a much more flexible approach to the varied needs of data creators and users. “High-density hard drives, high-speed networks, and easy-to-use management tools allow network storage vendors to deliver better products at a better price-to-performance ratio than was previously thought possible,” he writes. But he adds that storage has become such an important part of a company’s IT strategy that price alone is no longer the prime factor in determining which product to buy and which architecture to use.

Melloy says that the most important consideration is “how you integrate scaleable, manageable, and reliable storage into your infrastructure now that end users, be they internal or external, have come to expect 100 percent availability.”

According to the article, there have been two major shifts in technology that can lead unprepared organizations to make costly mistakes. The first is the impact of networking technology on storage architecture and data management. Melloy writes that as networks have become faster, the bottleneck has shifted from the network to the server and its associated hard drives.

The second is the impact of parallel processing on storage product design. Server, storage, and networking vendors have implemented significantly different product designs to take advantage of parallel processing technology for Direct Attached Storage, Network Attached Storage, and Storage Area Network products, says Melloy. He makes the following distinctions among them:

- The DAS model evolved in the server industry and can be interpreted as the way computer systems worked before networks, with data stored in isolation from other systems. DAS is appropriate for both low-end PC and high-performance mainframe applications. It’s also suitable for certain compute-intensive online transaction processing database applications.
- NAS grew up in the networking industry and is best for Unix and Windows NT data-sharing applications, consolidated file servers, technical and scientific applications, the Internet, and certain types of decision support applications.
- SAN was developed in the storage industry. Because of the current lack of standards, all SAN configurations are proprietary, and whether these schemes will be interoperable in the long term is still not apparent. Although the long-term vision is for interoperability among heterogeneous servers and storage products, Melloy says it is advisable to implement early SAN applications in a homogeneous environment using a solution from a single vendor, such as EMC, Hitachi Data Systems, or Compaq Computer.
The article explains that, although many of the benefits promised by the SAN vision are already available on NAS, certain synergies with existing enterprise operational and management tools have led early adopters to experiment with test deployments. SAN is ideal for applications that don’t require true data sharing, a feature unlikely to be available until SAN standards evolve to the same level as current NAS. It is also appropriate for applications where the security risks inherent in Fibre Channel can be managed, and where performance bottlenecks arising from Fibre Channel node and link congestion can be avoided.
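From a host’s point of view, the clearest way to see the three-way split is in how the storage is mounted: DAS and SAN volumes both appear to the operating system as local block devices (the SAN disk simply lives out on the Fibre Channel fabric), while a NAS export is mounted as a remote file system over the network. The sketch below illustrates this with hypothetical Unix /etc/fstab entries -- the device names, mount points, and filer hostname are invented for illustration:

```
# Hypothetical /etc/fstab entries illustrating the three storage models

# DAS: a disk cabled directly to this server, seen as a local block device
/dev/sdb1          /data/local    ext3   defaults       0  2

# SAN: a Fibre Channel LUN -- physically on the storage network, but
# presented to the host exactly like another local block device
/dev/sdc1          /data/san      ext3   defaults       0  2

# NAS: a file system exported by a network filer and mounted over NFS;
# the host sees files, not a raw disk, so it can be shared by many clients
filer1:/vol/home   /data/shared   nfs    rw,hard,intr   0  0
```

Note the practical consequence Melloy’s distinctions imply: the NAS entry can appear in many hosts’ tables simultaneously (true data sharing), whereas mounting the same DAS or SAN block device read-write from two hosts at once would corrupt the file system.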

“Determining the best storage architecture for specific applications can be confusing because the various architectures evolved separately out of three different industries,” says Melloy. “The result is a conflict in vendor claims and different product designs that bring their own benefits and drawbacks to different applications.”

He adds that while analysts predict a major shift from DAS to NAS over the next five years, many enterprises still use general-purpose servers to handle files in Unix and NT environments. “This has drawbacks in availability, performance, scalability, and manageability, compared to specialist products from vendors such as Network Appliance, EMC, and Auspex Systems.” He notes that such specialist products may be more suitable for dedicated network file serving.
