Architecture
1. Version 3.0 introduces SAN Farm and Matrix Link infrastructures.
A SAN Farm is a collection of entities of the same type (Hosts, Devices, or Switches), so there are just three types of SAN Farm: Host Farm, Device Farm, and Fabric Farm.
Matrix Links provide physical-layer link connectivity between SAN Farms. Version 3 supports connectivity between a Host Farm and a Fabric Farm, or between a Device Farm and a Fabric Farm. There are two types of Matrix Link, pre-configured and user-configured:
- Pre-configured Matrix Link: A link statically defined between a physical port and a matrix link terminator.
- User-configured Matrix Link: A user-defined link between matrix link terminators. Links of this type ultimately determine the connectivity between ports.
The following picture shows the concepts of SAN Farm and Matrix Link:
The ideas behind SAN Farm and Matrix Link are simple: to facilitate (1) scalable link configurations and (2) flexible topology displays for more complex SAN configurations, for example a multi-datacenter SAN. The SANFarms.ned and SimSANs_v3.ned files should give a clear picture of how they are implemented.
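The port-to-port resolution through pre- and user-configured links can be sketched as a small lookup. This is a toy model with hypothetical names, not actual SimSANs code:

```python
# Toy sketch of Matrix Link resolution. A pre-configured link statically
# binds a physical port to a matrix link terminator; a user-configured link
# joins two terminators, which ultimately determines port-to-port
# connectivity: port -> terminator -> terminator -> port.

def peer_of(port, pre_configured, user_configured):
    """Resolve the remote port a local port is effectively connected to."""
    term = pre_configured.get(port)
    far_term = user_configured.get(term)  # user link stored one-way for brevity
    # Invert the pre-configured map to find the port bound to the far terminator.
    for p, t in pre_configured.items():
        if t == far_term:
            return p
    return None

# A Host Farm HBA port and a Fabric Farm switch port, each statically bound
# to a terminator (pre-configured); the user then wires the terminators together.
pre = {"host1.hba0": "hostFarm.t0", "switch1.p4": "fabricFarm.t2"}
user = {"hostFarm.t0": "fabricFarm.t2"}

print(peer_of("host1.hba0", pre, user))  # -> switch1.p4
```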
2. Version 3.0 also introduces the SAN Entity concept.
A SAN Entity is a SAN device equipped with one or more physical ports that connect with other SAN Entities to facilitate SAN traffic initiation (initiator), transport (networking), and termination (target). Accordingly, there are three types of SAN Entity: Host, Switch, and Storage Device.
2.1 A Host entity consists of a collection of HBA Adapters (SAN Adapters), one SCSI module, one IO Generator module, and one CPU module.
- IO Generator: The SAN traffic source. The user configures which target LUNs to issue IOs against and with what IO patterns (random or sequential access, request size, outstanding IO count, etc.). It is similar to Iometer, but simplified.
- SCSI Module: This is where the SCSI layer (Initiator) resides. Very basic SCSI commands are implemented to support SCSI device discovery and IO, including REPORT_LUNS, INQUIRY, READ_CAPACITY, READ_10, and WRITE_10. The SCSI architecture and command references can be found in SAM-2, SPC-3, and SBC-3 on the T10 website. Simple multi-path logic is implemented via INQUIRY VPD page 0x83, which identifies a target LUN by its unique WWN. More commands will be added.
- HBA Adapter: The actual networking transport layer that carries SCSI command and data packets to and from the Storage Device. Currently only the Fibre Channel protocol is implemented, including FCP-2, FC-FS, FC-GS, and FC-LS, which can be found on the T11 website. Note: the HBA driver's computational cost is counted as part of the host CPU module's workload.
- CPU Module: This is where the primary computational cost of the SCSI initiator layer and the Port Driver layer is calculated.
The NED file ClientHost.ned shows how the Host entity is implemented. The following picture illustrates the Host entity concept.
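The IO Generator's workload configuration can be sketched as follows; the function and parameter names are illustrative assumptions, not SimSANs' actual ones:

```python
import random

def generate_ios(pattern, request_size, count, lun_blocks, seed=0):
    """Produce (offset, size) IO requests against a LUN of lun_blocks blocks,
    in either "sequential" or "random" mode."""
    rng = random.Random(seed)
    offset = 0
    ios = []
    for _ in range(count):
        if pattern == "sequential":
            ios.append((offset % lun_blocks, request_size))
            offset += request_size
        else:  # "random"
            ios.append((rng.randrange(lun_blocks), request_size))
    return ios

# Three sequential 8-block requests against a 1024-block LUN.
print(generate_ios("sequential", request_size=8, count=3, lun_blocks=1024))
# -> [(0, 8), (8, 8), (16, 8)]
```

A real generator would also throttle issuance against the configured outstanding IO count; that bookkeeping is omitted here.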
2.2 A Switch entity consists of a collection of Linecard modules (SAN Adapters) and one Switch Center module.
- Linecard Module: The actual networking transport layer that forwards and routes SAN traffic frames. Currently only the Fibre Channel protocol is implemented, including FC-SW-2 and FC-GS. FC-BB-5 Clause 7 will be added later to support FCoE. The standards can be found on the T11 website.
- Switch Center: Four major functions reside here: switching, routing, the zoning service, and the directory service (SNS).
The NED file Fabric.ned shows how the Switch entity is implemented. The following picture illustrates the Switch entity concept.
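Two of the Switch Center services, the directory service (SNS) and zoning, can be sketched as a small toy model; the class and method names are hypothetical, not SimSANs' actual ones:

```python
class SwitchCenter:
    """Toy Switch Center: ports register their WWPN with the SNS, and the
    zoning service gates which registered ports may talk to each other."""

    def __init__(self):
        self.sns = {}     # directory service: WWPN -> fabric port id
        self.zones = []   # zoning service: each zone is a set of WWPNs

    def register(self, wwpn, port_id):
        """SNS registration, as performed at fabric login."""
        self.sns[wwpn] = port_id

    def can_communicate(self, wwpn_a, wwpn_b):
        """Zoning check: both ports must be registered and share a zone."""
        if wwpn_a not in self.sns or wwpn_b not in self.sns:
            return False
        return any(wwpn_a in z and wwpn_b in z for z in self.zones)

sc = SwitchCenter()
sc.register("10:00:00:00:c9:aa:00:01", 1)   # a host HBA port
sc.register("50:06:01:60:00:00:00:02", 5)   # a storage FEIB port
sc.zones.append({"10:00:00:00:c9:aa:00:01", "50:06:01:60:00:00:00:02"})

print(sc.can_communicate("10:00:00:00:c9:aa:00:01",
                         "50:06:01:60:00:00:00:02"))  # -> True
```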
2.3 A Storage Device entity consists of a collection of Front-end Interface Blades (SAN Adapters), one SCSI module, one Virtualization Engine module, and one CPU module.
- FEIB (Front-end Interface Blade): The actual networking transport layer that carries SCSI command and data packets to and from the Client Host. Currently only the Fibre Channel protocol is implemented, including FCP-2, FC-FS, FC-GS, and FC-LS, which can be found on the T11 website. As for the driver's computational cost, unlike the host HBA Adapter, a separate FEIB processor module handles the task.
- SCSI Module: This is where the SCSI layer (Target) resides. Incoming SCSI commands from SCSI initiators, such as REPORT_LUNS, INQUIRY, READ_CAPACITY, READ_10, and WRITE_10, are handled here. The SCSI architecture and command references can be found in SAM-2, SPC-3, and SBC-3 on the T10 website.
- VE (Virtualization Engine) Module: Target LUN creation and assignment, association of client hosts with initiators, and target controller assignment are handled here. More functionality will be added later when the caching and back-end storage modules are supported.
- CPU Module: This is where the primary computational cost of the SCSI target layer and the VE module is calculated.
The NED file StorageDevice.ned shows how the Storage Device entity is implemented. The following picture illustrates the Storage Device entity concept.
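The VE module's LUN creation and assignment can be sketched as a simple masking table; the class and method names below are illustrative assumptions, not SimSANs' actual API:

```python
class VirtualizationEngine:
    """Toy VE: LUNs are created on the device and assigned to initiator
    WWPNs; a REPORT_LUNS-style query returns only the LUNs visible to
    that initiator."""

    def __init__(self):
        self.luns = {}         # lun id -> capacity in blocks
        self.assignments = {}  # initiator WWPN -> set of assigned lun ids

    def create_lun(self, lun_id, capacity_blocks):
        self.luns[lun_id] = capacity_blocks

    def assign(self, initiator_wwpn, lun_id):
        self.assignments.setdefault(initiator_wwpn, set()).add(lun_id)

    def report_luns(self, initiator_wwpn):
        """What the SCSI target would answer to this initiator's REPORT_LUNS."""
        return sorted(self.assignments.get(initiator_wwpn, set()))

ve = VirtualizationEngine()
ve.create_lun(0, 2048)
ve.create_lun(1, 4096)
ve.assign("10:00:00:00:c9:aa:00:01", 0)   # only LUN 0 is masked to this host

print(ve.report_luns("10:00:00:00:c9:aa:00:01"))  # -> [0]
```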
3. The last major concept introduced in version 3.0 is the SAN Adapter.
All SAN Entities share the same SAN Adapter structure, which contains a set of SAN Port, Port Firmware, and Port Driver layers, one ULP Agent that interacts with an upper-layer protocol (for example, SCSI), and a Driver Processor module.
- SAN Port: The physical and link access layer. For a Fibre Channel port (N_Port, F_Port, or E_Port), this layer is FC-FS, which primarily handles FC framing operations. An Ethernet MAC layer will be implemented when FCoE support is added later.
- Port Firmware: Strictly speaking, this should be part of the SAN Port. However, in SimSANs v3 it is separated out as a bridging layer between the SAN Port and the Port Driver. This layer (part of FC-FS) handles FC link events as well as sequencing operations mapped from the Port Driver layer.
- Port Driver: This layer acts as the FC application layer, including FC-LS, FC-GS, FCP (both Initiator and Target), and FC-SW. The computational cost in this layer is counted.
- ULP Agent: A bridging layer between the Port Driver and an upper layer such as the SCSI layer or the Switch Center.
- Driver Processor Module: This is where the primary computational cost of the driver layer is calculated. Note: for a Host entity, the driver processor calls the host CPU module to handle the computational cost, while for a Storage Device entity, the driver processor is a dedicated module and does not call the device CPU module for driver-related work.
The NED file SANAdapter.ned shows how the SAN Adapter is implemented. The following picture illustrates the SAN Adapter concept.
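The host-versus-storage difference in driver cost accounting can be sketched as follows; the classes and names are hypothetical, meant only to show the delegation pattern:

```python
class Processor:
    """Toy processing resource that accumulates charged computational cost."""
    def __init__(self, name):
        self.name = name
        self.busy_time = 0.0

    def charge(self, cost):
        self.busy_time += cost

class SanAdapter:
    """A SAN Adapter charges Port Driver work to whatever processor it is wired to."""
    def __init__(self, driver_processor):
        self.driver_processor = driver_processor

    def run_driver_task(self, cost):
        self.driver_processor.charge(cost)

host_cpu = Processor("hostCPU")          # shared host CPU module
device_cpu = Processor("deviceCPU")      # storage device CPU module
feib_proc = Processor("feibDriverProc")  # dedicated FEIB driver processor

hba = SanAdapter(host_cpu)    # Host: HBA delegates driver cost to the host CPU
feib = SanAdapter(feib_proc)  # Storage: FEIB uses its own dedicated processor

hba.run_driver_task(1.5)
feib.run_driver_task(1.5)
print(host_cpu.busy_time, feib_proc.busy_time, device_cpu.busy_time)
# -> 1.5 1.5 0.0  (the device CPU is never charged for driver work)
```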