What Is a Storage Area Network? SAN Explained

A storage area network (SAN) is a dedicated high-speed network or subnetwork that interconnects and presents shared pools of storage devices to multiple servers.
The availability and accessibility of storage are critical concerns for enterprise computing. Traditional direct-attached disk deployments within individual servers can be a simple and inexpensive option for many enterprise applications, but the disks — and the vital data those disks contain — are tied to the physical server across a dedicated interface, such as SAS. Modern enterprise computing often demands a much higher level of organization, flexibility and control. These needs drove the evolution of the storage area network (SAN).
SAN technology addresses advanced enterprise storage demands by providing a separate, dedicated, highly scalable high-performance network designed to interconnect a multitude of servers to an array of storage devices. The storage can then be organized and managed as cohesive pools or tiers. A SAN enables an organization to treat storage as a single collective resource that can also be centrally replicated and protected, while additional technologies, such as data deduplication and RAID, can optimize storage capacity and vastly improve storage resilience — compared to traditional direct-attached storage (DAS).
Simply stated, a SAN is a network of disks that is accessed by a network of servers. There are several popular uses for SANs in enterprise computing. A SAN is typically employed to consolidate storage. For example, it’s common for a computer system, such as a server, to include one or more local storage devices. But consider a data center with hundreds of servers, each running virtual machines that can be deployed and migrated between servers as desired. If the data for one workload is stored on that local storage, the data might also need to be moved if the workload is migrated to another server or restored if the server fails. Rather than attempt to organize, track and use the physical disks located in individual servers throughout the data center, a business might choose to move storage to a dedicated storage subsystem, such as a storage array, where the storage can be collectively provisioned, managed and protected.
A SAN can also improve storage availability. Because a SAN is essentially a network fabric of interconnected computers and storage devices, a disruption in one network path can usually be overcome by enabling an alternative path through the SAN fabric. Thus, a single cable or device failure doesn’t leave storage inaccessible to enterprise workloads. Also, the ability to treat storage as a collective resource can improve storage utilization by eliminating “forgotten” disks on underutilized servers. Instead, a SAN offers a central location for all storage, and enables administrators to pool and manage the storage devices together.
All of these use cases can enhance the organization’s regulatory compliance, disaster recovery (DR) and business continuity (BC) postures by improving IT’s ability to support enterprise workloads. But to appreciate the value of SAN technology, it’s important to understand how a SAN differs from traditional DAS.
With DAS, one or more disks are directly connected to a specific computer through a dedicated storage interface, such as SATA or SAS. The disks are often used to hold applications and data intended to run on that specific server. Although the DAS devices on one server can be accessed from other servers, the communication takes place over the common IP network — the LAN — alongside other application traffic. Accessing and moving large quantities of data through the everyday IP network can be time-consuming, and the bandwidth demands of large data movements can affect the performance of applications on the server.
A SAN operates in a profoundly different manner. The SAN interconnects all the disks into a dedicated storage area network. That dedicated network exists separate and apart from the common LAN. This approach enables any of the servers connected to the SAN to access any of the disks attached to the SAN, effectively treating storage as a single collective resource. None of the SAN storage data needs to pass across the LAN — mitigating LAN bandwidth needs and preserving LAN performance. Because the SAN is a separate dedicated network, the network can be designed to emphasize performance and resilience, which are beneficial to enterprise applications.
A SAN can support a huge number of storage devices, and storage arrays — specially designed storage subsystems — that support a SAN can scale to hold hundreds or even thousands of disks. Similarly, any server with a suitable SAN interface can access the SAN and its vast storage potential, and a SAN can support many servers. There are two principal types of networking technologies and interfaces employed for SANs: Fibre Channel and iSCSI.
A SAN is essentially a network that is intended to connect servers with storage. The goal of any SAN is to take storage out of individual servers and locate it collectively where storage resources can be centrally managed and protected. Such centralization can be performed physically, such as by placing disks into a dedicated storage subsystem like a storage array. But centralization can increasingly be handled logically through software — such as VMware vSAN — which relies on virtualization to find and pool available storage.
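To make the pooling idea concrete, here is a minimal, hypothetical Python sketch of logical storage pooling: software tallies the free capacity reported by each server and carves volumes out of the combined pool. The server names, capacities and greedy allocation are invented for illustration; real software-defined storage such as vSAN adds redundancy, rebalancing and failure handling.

```python
# Hypothetical sketch: pool per-server free capacity and carve a volume
# out of the combined pool. All figures are invented for illustration.
local_free_gb = {"server-01": 400, "server-02": 250, "server-03": 600}

def provision(volume_gb: int) -> dict[str, int]:
    """Greedily allocate a volume's capacity from the pooled free space."""
    allocation, remaining = {}, volume_gb
    for server, free in local_free_gb.items():
        take = min(free, remaining)
        if take:
            allocation[server] = take
            local_free_gb[server] -= take
            remaining -= take
    if remaining:
        raise RuntimeError("pool exhausted")
    return allocation

print(sum(local_free_gb.values()))  # 1250 GB visible as one pool
print(provision(500))               # {'server-01': 400, 'server-02': 100}
```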
By connecting the collective storage to servers through a separate network — apart from the traditional LAN — storage traffic performance can be optimized and accelerated because the storage traffic no longer needs to compete for LAN bandwidth needed by servers and their workloads. Thus, enterprise workloads can potentially get faster access to astonishing volumes of storage. A SAN is generally perceived as a series of three distinct layers: a host layer, a fabric layer and a storage layer. Each layer has its own components and characteristics.
A SAN also employs a series of protocols enabling software to communicate or prepare data for storage. The most common protocol is the Fibre Channel Protocol (FCP), which maps SCSI commands over FC technology. iSCSI SANs employ the iSCSI protocol, which maps SCSI commands over TCP/IP. But there are other protocol combinations, such as ATA over Ethernet, which maps ATA storage commands over Ethernet, as well as Fibre Channel over Ethernet (FCoE) and other lesser-used protocols — including iFCP, which maps FCP over IP, and iSCSI Extensions for RDMA (iSER), which maps iSCSI over InfiniBand. SAN technologies will often support multiple protocols, helping to ensure that all layers, operating systems and applications are able to communicate effectively.
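The common thread in these protocols is encapsulation: a standard SCSI command is wrapped in a transport-specific envelope. The Python sketch below illustrates only that layering idea with a deliberately simplified, made-up header — it is not the real iSCSI PDU format (RFC 7143 defines that) — by packing a SCSI READ(10) command descriptor block inside a toy header that could ride in a TCP stream.

```python
import struct

def build_read10_cdb(lba: int, blocks: int) -> bytes:
    """Build a 10-byte SCSI READ(10) CDB: opcode 0x28, 4-byte LBA,
    2-byte transfer length, plus flag/group/control bytes."""
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

def encapsulate(cdb: bytes, lun: int) -> bytes:
    """Toy transport header (NOT real iSCSI): 2-byte LUN + 2-byte CDB length."""
    return struct.pack(">HH", lun, len(cdb)) + cdb

# The resulting bytes are what a transport such as TCP/IP (iSCSI) or an
# FC frame (FCP) would carry from initiator to target.
packet = encapsulate(build_read10_cdb(lba=2048, blocks=8), lun=1)
print(packet.hex())
```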
To integrate all components of the SAN, an enterprise first must meet the vendor's hardware and software compatibility requirements, confirming that HBAs, switch firmware, storage arrays and host operating systems are all mutually supported. Then, to set up the SAN, the organization typically installs an HBA in each host; cables the hosts and storage systems to the SAN switches; configures the fabric, including zoning; provisions LUNs on the storage arrays and masks them to the appropriate hosts; and installs and validates multipathing software.
The core of a SAN is its fabric: the scalable, high-performance network that interconnects hosts — servers — and storage devices or subsystems. The design of the fabric is directly responsible for the SAN's reliability and complexity. At its simplest, an FC SAN attaches HBA ports on servers directly to corresponding ports on SAN storage arrays, often using optical cabling for top speed and for networking over greater physical distances.
But such simple connectivity schemes belie the true power of a SAN. In actual practice, the SAN fabric is designed to enhance storage reliability and availability by eliminating single points of failure. A central strategy in creating a SAN is to employ a minimum of two connections between any SAN elements. The goal is to ensure that at least one working network path is always available between SAN hosts and SAN storage.
Consider a simple example in which two SAN hosts must communicate with two SAN storage subsystems. Each host employs two separate HBAs — rather than a single multiport HBA, because the HBA device itself would be a single point of failure. The port from each HBA is connected to a port on a different SAN switch, such as a Fibre Channel switch. Similarly, multiple ports on each SAN switch connect to different storage target devices or systems. This is a simple redundant fabric: remove any one connection, and both servers can still communicate with both storage systems, preserving storage access for the workloads on both servers.
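On the host side, this redundancy is exploited by multipathing: the host keeps several independent paths to the same storage target and fails over when one path errors out. The sketch below is a hypothetical Python illustration of that failover logic — the path names and the send_io() stub are invented, and real hosts rely on OS-level multipathing such as Linux dm-multipath.

```python
class PathFailedError(Exception):
    pass

def send_io(path: str, request: bytes) -> bytes:
    """Stub for issuing an I/O request down one fabric path; here every
    path is simulated as failed so the failover loop is exercised."""
    raise PathFailedError(f"{path} is down")

def issue_with_failover(paths: list[str], request: bytes) -> bytes:
    for path in paths:
        try:
            return send_io(path, request)
        except PathFailedError:
            continue  # one cable, HBA or switch failed: try the next path
    raise RuntimeError("all paths to the storage target failed")

# Two HBAs and two switches yield four independent paths to one LUN.
paths = ["hba0-switchA", "hba0-switchB", "hba1-switchA", "hba1-switchB"]
try:
    issue_with_failover(paths, b"\x28")  # every stub path fails here
except RuntimeError as err:
    print(err)
```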
Consider the basic behavior of a SAN and its fabric. When a host server requires access to SAN storage, the host internally creates a request to access the storage device. The traditional SCSI commands used for storage access are encapsulated into packets for the network — in this case, FC packets structured according to the rules of the FC protocol. The packets are delivered to the host's HBA, which places them onto the network's optical or copper cabling and transmits them into the SAN, where they arrive at the SAN switch(es). A switch receives the request and forwards it to the corresponding storage device. In a storage array, the storage processor receives the request and interacts with storage devices within the array to fulfill it.
The SAN switch is the focal point of any SAN. As with most network switches, the SAN switch receives a data packet, determines the source and destination of the packet and then forwards that packet to the intended destination device. Ultimately, the SAN fabric topology is defined by the number of switches, the type of switches — such as backbone switches, or modular or edge switches — and the way in which the switches are interconnected. Smaller SANs might use modular switches with 16, 24 or even 32 ports, while larger SANs might use backbone switches with 64 or 128 ports. SAN switches can be combined to create large and complex SAN fabrics that connect thousands of servers and storage devices.
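At its core, that forwarding step is a table lookup, as in the hypothetical Python sketch below. The addresses and port numbers are made up for illustration; a real FC switch builds its routing state through fabric login and fabric routing protocols rather than a static table.

```python
# Hypothetical forwarding table: destination FC address -> egress port.
forwarding_table = {
    "0x010200": 3,   # HBA port on a host
    "0x010300": 7,   # target port on a storage array
}

def forward(frame_dst: str) -> int:
    """Return the egress port for a frame, as a switch would."""
    port = forwarding_table.get(frame_dst)
    if port is None:
        raise LookupError(f"no route to {frame_dst}")  # frame is dropped
    return port

print(forward("0x010300"))  # frame relayed out port 7 toward the array
```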
A fabric alone isn’t enough to ensure storage resilience. In actual practice, the storage systems must include an assortment of internal technologies, including RAID — disk groupings for more capacity and resilience — with robust error handling and self-healing capabilities. The storage system will typically add more technologies for efficient storage utilization, including thin provisioning, snapshots or storage cloning, data deduplication and data compression. Although a well-designed SAN fabric allows any host to reach any storage device, isolation techniques — such as zoning and LUN masking — can be used to restrict host access to some LUNs for better storage performance and security across the SAN.
Although SAN technology has been available for decades, there are several enhancements and dedicated improvements reshaping SAN design and deployment. These alternatives include virtual SAN, unified SAN, converged SAN and hyper-converged infrastructure (HCI).
Whether traditional or virtual, a SAN offers several compelling benefits that are vital for enterprise-class workloads.
But despite the benefits, SANs are hardly perfect, and there is an assortment of potential disadvantages for IT leaders to consider before deploying or upgrading a SAN.
Network-attached storage (NAS) is an alternative means of storing and accessing data that relies on file-based protocols, such as SMB and NFS — as opposed to the block-based protocols, such as FC and iSCSI, used in SANs. There are other differences between a SAN and NAS. Where a SAN uses a dedicated network to connect servers and storage, a NAS relies on a dedicated file server that sits between the servers and the storage.
Although both approaches store data, the choice of system will depend on the type of data being handled. A SAN is the preferred choice for block-based data storage, which usually applies well to structured data — such as storage for enterprise-class relational database applications. By comparison, a NAS — with its file-based approach — is better suited to unstructured data — such as document files, emails, images, videos and other common types of files. 
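The block-versus-file distinction is easy to see in code. In the hypothetical Python sketch below, block storage is addressed by block number and offset arithmetic, while file storage is addressed by name; the device object and file path are placeholders.

```python
BLOCK_SIZE = 512  # a common logical block size

def read_block(device, block_number: int) -> bytes:
    """Block access (SAN style): compute an offset, read raw bytes."""
    device.seek(block_number * BLOCK_SIZE)
    return device.read(BLOCK_SIZE)

def read_file(path: str) -> bytes:
    """File access (NAS style): open by name; the file system does the rest."""
    with open(path, "rb") as f:
        return f.read()
```

With block access, the application or database owns the data layout on the raw volume; with file access, the NAS's file system manages layout on the application's behalf.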
As with a SAN, a NAS consolidates storage in one place and can support data management and protection tasks, such as data archiving and backup. Yet a NAS uses the common network and typically carries far lower cost and complexity than a SAN. However, SANs shine in raw performance and scalability, delivering top performance to the most demanding enterprise applications.
SAN and NAS are not mutually exclusive. It is possible for a SAN and NAS to co-exist in the same data center where both block- and file-based data storage is required. Both SAN and NAS deployments can be upgraded to boost performance, streamline management, combat shadow IT and address storage capacity limitations. In some cases, separate storage systems might be replaced with a unified storage system, or the SAN network might be simplified using an iSCSI SAN.
There is no shortage of vendors and products to support enterprise SAN deployments. When planning a SAN, architects will typically consider the hosts (servers), the network (fabric) components and the storage subsystems.
Hosts. Any host can operate on the SAN, but every host server requires a suitable network interface to access the fabric. Enterprise-class servers can be purchased with multi-port FC HBAs already installed — a common tactic for technology refresh projects. If the servers don’t already incorporate an HBA, an HBA can be added as a server upgrade project. However, adding an HBA as an aftermarket upgrade will require an available PCIe slot on the server’s motherboard. IT staff must survey each target server and ensure that a suitable upgrade slot is physically available before purchasing and proceeding with the upgrades. In addition, such upgrades will require the server to be powered off, so IT staff must schedule server downtime and plan for such disruptive upgrades.
HBA cards are commonly manufactured based on core communications chips from technology leaders, including Agilent, ATTO, Broadcom, Brocade and QLogic. The actual HBAs are manufactured and sold through many technology vendors and procurement channels.
Network. The SAN fabric itself is composed of optical or copper cabling, as well as networking components, such as network switches. Like HBAs, suitable cabling is readily available through common technology vendors and procurement channels. Both edge and director switches are available based on technologies from major chip and technology manufacturers; examples include Broadcom's Brocade Fibre Channel switch and director lines and Cisco's MDS series.
Storage. Storage arrays get the most attention in a SAN because storage is the entire point of SAN technology, and storage subsystems provide many of the functions — deduplication, replication and so on — that make SANs so attractive to the enterprise. Major storage array vendors include Dell EMC, Hewlett Packard Enterprise, Hitachi Vantara, IBM and NetApp.
Other notable SAN storage vendors include Fujitsu, Lenovo, Oracle and Western Digital. Newer SAN vendors focusing on all-flash storage include Kaminario, Pure Storage, IntelliFlash — previously Tegile — and Violin Systems.
When it comes to storage, don’t overlook the potential value of SAN as a service. The idea is similar in principle to any cloud or SaaS offering, which is sold to customers as a managed service. A provider builds and administers a SAN and then proceeds to sell capacity on that SAN to outside customers. The provider is responsible for building and maintaining the SAN — and its features such as replication — and customers can access one or more LUNs created for them on the provider’s SAN, usually for a recurring monthly fee. SAN services are often sold in conjunction with other managed data services.
Several industry groups have developed standards related to SAN technology, including the Storage Networking Industry Association, which promotes the Storage Management Initiative Specification. SMI-S, as the standard is known, is intended to facilitate the management of storage devices from multiple vendors in storage area networks.
The Fibre Channel Industry Association also promotes standards related to SANs, including the Fibre Channel Physical Interface standard, which supports deployments of 64 GFC and Gen 7 solutions for the SAN market — the fastest industry-standard networking speeds — and enables storage area networks of up to 128 GFC.
A SAN poses serious management challenges. The physical network can be complex and requires constant oversight. In addition, the logical network configuration — such as LUN masking and zoning — and SAN-specific functions, such as replication and deduplication, can change and demand regular attention. To keep the SAN at peak performance, SAN administrators should consider several management best practices.
Some of the most meaningful practices involve SAN monitoring and reporting. Administrators should take the time to review metrics or key performance indicators (KPIs) in several areas of the SAN, such as storage capacity and utilization, throughput and latency, port and link errors, and the health of switches, HBAs and storage arrays.
By implementing a regular review process and taking advantage of alerts and reporting features within the SAN, an administrator can ensure a clear view of the SAN’s health and take proactive measures to keep the SAN operating properly.
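In its simplest form, such a review is a comparison of sampled metrics against thresholds, as in the hypothetical Python sketch below. The KPI names, sample values and limits are all invented for illustration; real monitoring tools collect this telemetry from the switches and arrays themselves.

```python
# Hypothetical KPI thresholds and one sampled reading.
thresholds = {
    "port_utilization_pct": 80.0,  # fabric link saturation
    "read_latency_ms": 10.0,       # storage responsiveness
    "capacity_used_pct": 85.0,     # pool headroom
}
sample = {
    "port_utilization_pct": 62.5,
    "read_latency_ms": 14.2,
    "capacity_used_pct": 71.0,
}

alerts = [
    f"{kpi} = {value} exceeds threshold {thresholds[kpi]}"
    for kpi, value in sample.items()
    if value > thresholds[kpi]
]
print("\n".join(alerts) or "all KPIs within range")  # flags the latency KPI
```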
In addition, SAN management can benefit from features and functionality designed to automate the SAN or mitigate storage disruptions. As examples, SANs that allow the use of policies for tasks such as provisioning and data protection can help administrators avoid oversights and mistakes that could waste storage or jeopardize security. Similarly, using features such as native replication can help protect valuable data while maintaining constant access to that data.
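Policy-driven provisioning can be pictured as a validation gate: a request is checked against the policy before any capacity is carved out, catching oversights such as unprotected or oversized volumes. The policy fields in this Python sketch are hypothetical.

```python
# Hypothetical provisioning policy.
policy = {"min_replicas": 2, "max_volume_gb": 2048, "require_encryption": True}

def validate_request(volume_gb: int, replicas: int, encrypted: bool) -> None:
    """Raise ValueError if the request violates the provisioning policy."""
    if volume_gb > policy["max_volume_gb"]:
        raise ValueError("volume exceeds the policy size limit")
    if replicas < policy["min_replicas"]:
        raise ValueError("too few replicas for the data protection policy")
    if policy["require_encryption"] and not encrypted:
        raise ValueError("encryption is required by policy")

validate_request(volume_gb=512, replicas=2, encrypted=True)  # passes quietly
```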
Remote SAN management is a growing requirement for SAN administration. It enables SANs to be built in remote locations outside the main data center, and it lets a single SAN administrator support one or more SANs from anywhere in the world. Remote SAN management demands a reliable network connection between the management tool — the administrator — and the SAN being managed. The remote tool should convey comprehensive SAN health details, such as the KPIs mentioned above; support provisioning; and be able to launch diagnostics to help locate and eliminate potential SAN problems. Common remote SAN tools include SolarWinds Storage Resource Monitor, IntelliMagic Vision for SAN and eG Innovations Infrastructure Monitoring.