
Enterprise Storage Systems

IBM 2107 System Storage DS8000 Series

The IBM 2107 System Storage DS8000 series is designed to provide unmatched functionality, flexibility, and performance for enterprise disk storage systems at improved levels of cost effectiveness. The DS8000 series incorporates IBM's POWER5 processor technology either in a dual two-way processor-complex offering, the DS8100 Model 921, or, for customers requiring additional performance or capacity, in two dual four-way processor-complex offerings, the DS8300 Models 922 and 9A2. The DS8300 Model 9A2 further demonstrates IBM's unique ability to converge server and storage technologies by enabling the creation of multiple IBM System Storage LPARs (Logical Partitions). These System Storage LPARs can be used for completely separate production or test environments in a single physical system, which may enable you to purchase one storage server where more than one would have been needed in the past.

The IBM 2107 System Storage DS8000 series also offers three Turbo Models: the IBM System Storage DS8100 Turbo Model 931 and the IBM System Storage DS8300 Turbo Models 932 and 9B2. The Turbo Models are designed to deliver enterprise-class storage capabilities and functionality similar to those of the current DS8000 models (DS8100 Model 921 and DS8300 Models 922 and 9A2).

Model Abstract 2107-9AE

The IBM 2107 System Storage DS8000 Model 9AE is the expansion unit for the System Storage DS8300 Model 9A2 and DS8300 Turbo Model 9B2. It holds up to 256 disk drives for a maximum capacity of up to 76.8 TB. It also supports up to 16 ESCON or Fibre Channel/FICON adapters.

Model Summary Matrix

Model: 9AE
Processor: N/A
Physical Capacity: 76.8 TB
Disk Drives: 256
Processor Memory: N/A
Host Adapters: 16
Attaches to: 9A2 or 9B2

Maximum expansion attachments:

The Models 921 and 931 support the attachment of one Model 92E expansion unit.

The Models 922 and 932 support the attachment of up to two Model 92E expansion units.

The Models 9A2 and 9B2 support the attachment of up to two Model 9AE expansion units.

Model Abstract 2107-9A2

The IBM 2107 System Storage DS8300 Model 9A2 is a base unit that offers a dual four-way processor complex and supports the use of two System Storage LPARs. It holds up to 128 disk drives for a maximum capacity of up to 64 TB. It also supports up to 256 GB of processor memory and up to 16 ESCON or Fibre Channel/FICON adapters. With additional optional DS8000 Model 9AE expansion units, it can scale up to 640 disk drives, for a maximum capacity of up to 320 TB.
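For context, the capacity figures quoted in this abstract follow from straightforward drive arithmetic. The sketch below is a minimal illustration only, assuming 500 GB drives (the size implied by 128 drives yielding 64 TB); the helper function is hypothetical, not an IBM tool.

```python
# Minimal capacity sketch for the figures quoted above. Assumes 500 GB
# drives, the size implied by 128 drives yielding 64 TB; purely illustrative.

DRIVE_GB = 500  # assumed drive capacity

def physical_capacity_tb(disk_drives: int, drive_gb: int = DRIVE_GB) -> float:
    """Raw physical capacity in decimal TB for a given drive count."""
    return disk_drives * drive_gb / 1000

print(physical_capacity_tb(128))            # base frame: 64.0 TB
print(physical_capacity_tb(128 + 2 * 256))  # with two 256-drive expansion units: 320.0 TB
```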

Model Summary Matrix

Model: 9A2
Processor: Four-way
Physical Capacity: 64 TB
Disk Drives: 128
Processor Memory: 256 GB
Host Adapters: 16
Attaches to: 9AE (1 or 2)

Maximum expansion attachments:

The Model 921 supports the attachment of one Model 92E expansion unit.

The Model 922 supports the attachment of up to two Model 92E expansion units.

The Model 9A2 supports the attachment of up to two Model 9AE expansion units.

Model Abstract 2107-92E

The IBM 2107 System Storage DS8000 Model 92E is the expansion unit for the System Storage DS8100 Model 921 and DS8300 Model 922, as well as the DS8100 Turbo Model 931 and DS8300 Turbo Model 932. It holds up to 256 disk drives for a maximum capacity of up to 128 TB. It also supports up to 16 ESCON or Fibre Channel/FICON adapters.

Model Summary Matrix

Model: 92E
Processor: N/A
Physical Capacity: 128 TB
Disk Drives: 256
Processor Memory: N/A
Host Adapters: 16
Attaches to: 921 or 922

Maximum expansion attachments:

The Model 921 supports the attachment of one Model 92E expansion unit.

The Model 922 supports the attachment of up to two Model 92E expansion units.

The Model 9A2 supports the attachment of up to two Model 9AE expansion units.

Model Abstract 2107-921

The IBM 2107 System Storage DS8100 Model 921 is a base unit that offers a dual two-way processor complex and holds up to 128 disk drives for a maximum capacity of up to 64 TB. It also supports up to 128 GB of processor memory and up to 16 ESCON or Fibre Channel/FICON adapters. With an optional expansion unit (DS8000 Model 92E), a single DS8100 Model 921 solution supports up to 384 disk drives, for a total capacity of up to 192 TB.

Model Summary Matrix

Model: 921
Processor: Two-way
Physical Capacity: 64 TB
Disk Drives: 128
Processor Memory: 128 GB
Host Adapters: 16
Attaches to: 92E (1)

Maximum expansion attachments:

The Model 921 supports the attachment of one Model 92E expansion unit.

The Model 922 supports the attachment of up to two Model 92E expansion units.

The Model 9A2 supports the attachment of up to two Model 9AE expansion units.

Model Abstract 2107-922

The IBM 2107 System Storage DS8300 Model 922 is a base unit that offers a dual four-way processor complex and holds up to 128 disk drives for a maximum capacity of up to 64 TB. It also supports up to 256 GB of processor memory and up to 16 ESCON or Fibre Channel/FICON adapters. With additional optional expansion units (DS8000 Model 92E), it can scale up to 640 disk drives, for a maximum capacity of up to 320 TB.

Model Summary Matrix

Model: 922
Processor: Four-way
Physical Capacity: 64 TB
Disk Drives: 128
Processor Memory: 256 GB
Host Adapters: 16
Attaches to: 92E (1 or 2)

Maximum expansion attachments:

The Model 921 supports the attachment of one Model 92E expansion unit.

The Model 922 supports the attachment of up to two Model 92E expansion units.

The Model 9A2 supports the attachment of up to two Model 9AE expansion units.

Model Abstract 2107-931

The 2107 IBM System Storage DS8100 Turbo Model 931 has a two-way processor, a physical capacity of 64 TB, 128 disk drives, 128 GB processor memory, 16 host adapters, and one 9xE attachment.

Model Abstract 2107-932

The 2107 IBM System Storage DS8300 Turbo Model 932 has a four-way processor, a physical capacity of 64 TB, 128 disk drives, 256 GB processor memory, 16 host adapters, and one or two 9xE attachments.​

Model Abstract 2107-9B2

The 2107 IBM System Storage DS8300 Turbo Model 9B2 has a four-way processor, a physical capacity of 64 TB, 128 disk drives, 256 GB processor memory, 16 host adapters, one or two 9xE attachments, and supports multiple storage system LPARs (logical partitions).

Model Summary Matrix

Model: 931
Processor: Two-way
Physical Capacity: 64 TB
Disk Drives: 128
Processor Memory: 128 GB
Host Adapters: 16
Attach: 9xE (1)
Misc.: -

Model: 932
Processor: Four-way
Physical Capacity: 64 TB
Disk Drives: 128
Processor Memory: 256 GB
Host Adapters: 16
Attach: 9xE (1 or 2)
Misc.: -

Model: 9B2
Processor: Four-way
Physical Capacity: 64 TB
Disk Drives: 128
Processor Memory: 256 GB
Host Adapters: 16
Attach: 9xE (1 or 2)
Misc.: (1)

Note: (1) This model supports multiple storage system LPARs (logical partitions).

IBM 2105 Enterprise Storage Server Model 800

Model Abstract 2105-800

The IBM 2105 ESS Model 800, the third generation of IBM intelligent storage, integrates a new generation of hardware, including faster symmetrical multi-processors (SMP) with an optional Turbo feature, 64 GB cache, double internal bandwidth, and 2Gb Fibre Channel/FICON host adapters. This hardware, in addition to RAID 10 support and 15,000 rpm drives, enables the Model 800 to deliver excellent levels of performance and throughput.

Only previously installed, refurbished ESS Model 800 machines are available. The refurbished Model 800 supports the same features and functions as the Model 800, with the exception of the ESS Standby Capacity on Demand offering. These refurbished machines include a three-year warranty and are offered on an as-available basis. The special bid process must be used by IBM representatives and IBM Business Partners to obtain availability and pricing information, and feature number 9940 (Refurbish Product Indicator) is required when placing an order for a refurbished Model 800.

The ESS Model 800 - The Third Generation of IBM Intelligent Storage

The ESS Model 800 is the third generation of the ESS and builds upon the functionality, stability, reliability, and proven track record of the earlier models.

The Model 800 integrates a new generation of hardware, including a standard processor (with an optional Turbo and Turbo II feature), 64 GB cache, 2 GB Non Volatile Storage (NVS), increased internal bandwidth, and 2Gb Fibre Channel/FICON Host Adapters. This hardware, when combined with RAID 10 support and 15,000 rpm drives, enables the Model 800 to deliver excellent levels of performance and throughput.

The Model 800 supports up to 55.9 TB of physical capacity that can be configured as RAID 5, RAID 10, or a combination of both. RAID 5 remains a price/performance leader, offering excellent performance for most customer applications, while RAID 10 can offer better performance for selected applications. Price, performance, and capacity can further be optimized to meet specific application and business requirements through the intermix of 18.2, 36.4, 72.8, and 145.6 GB drives operating at 10,000 or 15,000 rpm.
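As a rough illustration of the RAID 5 versus RAID 10 capacity trade-off described above, the sketch below compares the usable space of a single 8-drive rank of 72.8 GB drives, assuming a 7-data-plus-1-parity layout for RAID 5 and 4+4 mirrored pairs for RAID 10; real ESS ranks may also reserve spares, so treat the numbers as indicative only.

```python
# Rough usable-capacity comparison for one 8-drive rank of 72.8 GB drives.
# Assumed layouts: RAID 5 as 7 data + 1 parity, RAID 10 as 4 + 4 mirrored
# pairs; actual ESS array formats (including spares) vary by configuration.

DRIVES_PER_RANK = 8
DRIVE_GB = 72.8

raid5_usable = (DRIVES_PER_RANK - 1) * DRIVE_GB    # one drive's worth of parity
raid10_usable = (DRIVES_PER_RANK // 2) * DRIVE_GB  # half the drives hold mirror copies

print(f"RAID 5 rank:  {raid5_usable:.1f} GB usable")   # 509.6 GB
print(f"RAID 10 rank: {raid10_usable:.1f} GB usable")  # 291.2 GB
```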

Yet with all this, the fundamental design of the ESS remains unchanged. The Model 800 supports 24 x 7 operations with a design that avoids single points of failure by providing component redundancy. The Model 800 also maintains the advanced functions that deliver business continuance solutions (FlashCopy, FlashCopy Version 2, PPRC, PPRC Version 2, PPRC Extended Distance, and XRC), as well as FICON, Parallel Access Volumes (PAV), multiple allegiance, and I/O priority queuing to offer great performance in the zSeries and S/390 environments.

15,000 rpm Drives Offer Additional Levels of Throughput and Performance

The 15,000 rpm drives, which are available in 18.2, 36.4, and 72.8 GB capacities, offer levels of throughput and performance that can translate into price/performance benefits for the entire system.

  • An ESS populated with eight RAID 5 ranks of 18.2 or 36.4 GB, 15,000 rpm drives can provide up to 80% greater total system random throughput for a cache standard workload than a comparably configured ESS with 10,000 rpm drives.
  • The 72.8 GB, 15,000 rpm drives can provide up to 20% greater random throughput than a comparably configured ESS with 10,000 rpm drives.
  • An ESS populated with eight RAID 5 ranks of 72.8 GB, 15,000 rpm drives can provide up to 25% better service time for a cache standard workload than a similarly configured ESS with comparable capacity consisting of 16 ranks of 36 GB, 10,000 rpm drives.

These drives can help you run your workloads at significantly higher access densities without performance degradation, and fewer disk drives may be required to achieve high disk utilization rates, which can lead to cost savings. Reduced response times may also be realized, providing shorter batch processing windows or improved productivity because transactions complete more quickly.

Extensive Connectivity with 2Gb Fibre Channel/FICON Support For Higher Performance SANs

The Model 800 follows in the footsteps of the earlier ESS models with extensive connectivity support -- including Fibre Channel, FICON, ESCON, SCSI, iSCSI, and NAS -- across a broad range of server environments -- including zSeries and S/390, pSeries and RS/6000, iSeries and AS/400, xSeries and other Intel-based servers, Sun, Hewlett-Packard, and Compaq AlphaServer. This rich support of heterogeneous environments and attachments, along with the flexibility to easily partition the ESS storage capacity among the attached environments in any combination using the IBM TotalStorage Enterprise Storage Server Specialist (ESS Specialist), helps support storage consolidation requirements and dynamic, changing environments.

The Model 800 further enhances this broad set of connectivity options with support for 2Gb Fibre Channel and FICON. The 2Gb Fibre Channel/FICON Host Adapters, which are offered in long-wave and short-wave versions, auto-negotiate to either 2Gb or 1Gb link speeds. This flexibility enables immediate exploitation of the benefits offered by higher performance, 2Gb SAN-based solutions, while also maintaining compatibility with existing 1Gb infrastructures. As 2Gb capability on servers and fabric components is introduced into existing SANs, your investment in ESS fibre adapters is protected and migration is simplified.

IBM TotalStorage Resiliency Family

The IBM TotalStorage Resiliency Family comprises IBM TotalStorage Resiliency Core Technology, an extensive set of hardware and software features and products, and IBM TotalStorage Resiliency Automation, a set of integrated software and services packages. The IBM TotalStorage Resiliency Family is designed to help you implement storage infrastructures that keep your business running 24 hours a day, 7 days a week.

The following ESS Model 800 functions are key components of the IBM TotalStorage Resiliency Core Technology:

  • FlashCopy
  • PPRC:
      • PPRC Metro Mirror (Synchronous PPRC)
      • PPRC Global Mirror (Asynchronous PPRC)
      • PPRC Metro/Global Copy (Asynchronous Cascading PPRC)
      • PPRC Global Copy (PPRC Extended Distance)
  • Extended Remote Copy (XRC):
      • zSeries Global Mirror (XRC)
      • zSeries Metro/Global Mirror (three-site solution using Synchronous PPRC and XRC)

FlashCopy: FlashCopy is designed to provide a point-in-time copy capability for logical volumes. FlashCopy creates a physical point-in-time copy of the data, with minimal interruption to applications, and makes it possible to access both the source and target copies immediately. FlashCopy is an optional feature for the ESS (feature numbers 83xx).
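Conceptually, the source and target can both be used immediately because source data only needs to be preserved for the target when either copy is about to change. The Python sketch below is a simplified, hypothetical copy-on-write model of a point-in-time relationship; it is not the ESS microcode algorithm and it ignores the optional background copy.

```python
# Simplified copy-on-write model of a point-in-time copy relationship.
# "Tracks" are list slots here; a real FlashCopy works on volume tracks and
# can also run a background copy. Purely illustrative.

class PointInTimeCopy:
    def __init__(self, source: list):
        self.source = source
        self.preserved = {}              # track index -> image at establish time

    def write_source(self, track: int, value) -> None:
        # Copy-on-write: keep the establish-time image before overwriting it.
        if track not in self.preserved:
            self.preserved[track] = self.source[track]
        self.source[track] = value

    def read_target(self, track: int):
        # The target always sees the establish-time image.
        return self.preserved.get(track, self.source[track])

volume = ["a", "b", "c"]
flash = PointInTimeCopy(volume)          # "establish" the relationship
flash.write_source(1, "B")               # application updates the source
print(volume)                            # ['a', 'B', 'c']
print(flash.read_target(1))              # 'b' - the point-in-time image
```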

FlashCopy Version 2 further supports business continuance solutions with the following additional functions:

  • Data Set FlashCopy
  • Multiple Relationship FlashCopy
  • Incremental FlashCopy
  • FlashCopy to a PPRC primary
  • Additional enhancements:
      • Elimination of the Logical Subsystem (LSS) constraint
      • Establish time improvement
      • Consistency group commands
      • Inband commands over PPRC link

Data Set FlashCopy allows you to perform a FlashCopy of a data set in zSeries and S/390 environments. This level of granularity allows for more efficient use of your ESS capacity and can also help reduce the background copy completion time since a FlashCopy no longer needs to be performed at a volume level when a copy of selected data sets within a volume is required. As with FlashCopy at the volume level, Data Set FlashCopy is fully supported by z/OS Data Facility Storage Management Subsystem (DFSMS).​

With Multiple Relationship FlashCopy, a source can have FlashCopy relationships with multiple targets simultaneously. This flexibility allows you to initiate up to 12 FlashCopy establishes on a given logical unit number (LUN), volume, or data set, without needing to first wait for or cause previous relationships to end.

Incremental FlashCopy provides the capability to "refresh" a LUN or volume involved in a FlashCopy relationship. When a subsequent FlashCopy establish is initiated, only the data required to bring the target current to the source's newly established point-in-time is copied. The direction of the "refresh" can also be reversed, in which case the LUN or volume previously defined as the target becomes the source for the LUN or volume previously defined as the source (and is now the target). Incremental FlashCopy can help reduce the background copy completion time when only a subset of data on either the source or target has changed, giving you the option to perform a FlashCopy on a more frequent basis. Additionally, if no updates were made to the target since the last "refresh", the direction change could be used to "restore" the source back to the previous point-in-time state.
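A refresh can stay incremental because changes made since the last establish are tracked, and only those tracks are re-copied. The sketch below is a hypothetical, bitmap-style illustration of that idea, not the actual ESS implementation.

```python
# Hypothetical change-tracking sketch of an incremental refresh: only tracks
# modified since the last establish are copied to the target.

def refresh(source: list, target: list, changed_tracks: set) -> int:
    """Copy only the changed tracks from source to target; return the count."""
    for track in changed_tracks:
        target[track] = source[track]
    copied = len(changed_tracks)
    changed_tracks.clear()               # start a new change-recording period
    return copied

source = [0, 1, 2, 3]
target = list(source)                    # state after the initial full establish
changed = set()

source[2] = 99                           # application update after the establish
changed.add(2)

print(refresh(source, target, changed))  # 1 track copied
print(target)                            # [0, 1, 99, 3]
```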

FlashCopy to a PPRC primary lets you establish a FlashCopy relationship where the FlashCopy target is also a PPRC primary volume. This enables you to create full or incremental point-in-time copies at a local site and then use PPRC to copy the data to the remote site.​

FlashCopy Version 2 is an optional feature to the ESS (feature numbers 86xx). In addition to supporting the functions listed above, FlashCopy Version 2 includes support for all functions offered with FlashCopy (feature numbers 83xx).​​

Peer to Peer Remote Copy (PPRC): PPRC is designed to offer hardware-based remote copy solutions. Synchronous PPRC (PPRC Metro Mirror) is designed to provide real-time mirroring of logical volumes between two ESSs that can be located up to 103 km from each other when using ESCON communication links and up to 300 km when using Fibre Channel communication links. It is a synchronous copy solution where write operations are completed on both copies (local and remote site) before they are considered to be done. PPRC also includes PPRC Extended Distance (PPRC Global Copy), a non-synchronous long-distance copy option for data migration and backup. PPRC is an optional feature to the ESS (feature numbers 82xx).​
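The defining property of the synchronous mode is that a host write is only reported complete after both the local and the remote copy have been updated, which keeps the two sites identical at the cost of added latency over distance. The sketch below is a minimal, hypothetical model of that write path, not the PPRC implementation.

```python
# Minimal model of a synchronous remote-copy write: the host sees the I/O as
# complete only after both the primary and the secondary hold the data.
# Hypothetical illustration; real PPRC runs inside the ESS control units.

class SynchronousMirror:
    def __init__(self):
        self.primary = {}                 # local-site copy
        self.secondary = {}               # remote-site copy

    def write(self, block: int, data: bytes) -> str:
        self.primary[block] = data        # write at the local site
        self.secondary[block] = data      # replicate to the remote site
        return "I/O complete"             # acknowledged only after both writes

mirror = SynchronousMirror()
print(mirror.write(7, b"payroll"))        # both copies now hold the data
```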

PPRC Version 2 further supports business continuance solutions with the following additional functions:

  • PPRC over Fibre Channel links
  • Asynchronous PPRC (PPRC Global Mirror)
  • Asynchronous Cascading PPRC (PPRC Metro/Global Copy)

PPRC over Fibre Channel links enables the use of Fibre Channel as the communications link between the PPRC primary and PPRC secondary machines. Compared to PPRC over ESCON links, PPRC over Fibre Channel allows a reduction in PPRC link infrastructure, while delivering equivalent or better performance. PPRC communication to the secondary control unit, via Fibre Channel, provides for a much faster interface compared to ESCON, particularly at long distances. Within a single machine, you can run PPRC over both Fibre Channel and ESCON links, although the link type cannot be mixed within the same LSS. The Fibre Channel ports used for PPRC can be configured as either a dedicated PPRC link or as a shared port between PPRC and Fibre Channel Protocol (FCP) data traffic.​

PPRC Global Mirror is designed to provide a long-distance remote copy solution across two sites using asynchronous technology. It operates over high-speed Fibre Channel communication links and is designed to provide the following (a simplified sketch of the consistency-group approach follows the list):

  • Support for virtually unlimited distance between the local and remote sites, with the distance typically limited only by the capabilities of the network and channel extension technologies. This can better enable you to choose your remote site location based on business needs and enables site separation to add protection from localized disasters.
  • A consistent and restartable copy of the data at the remote site, created with minimal impact to applications at the local site. Compared to Asynchronous Cascading PPRC (PPRC Metro/Global Copy), PPRC Global Mirror eliminates the requirement to perform a manual and periodic suspend at the local site in order to create a consistent and restartable copy at the remote site.
  • Data currency where, for many environments, the remote site lags behind the local site an average of 3 to 5 seconds, minimizing the amount of data exposure in the event of an unplanned outage. Data currency is also known as the recovery point objective (RPO). The actual lag in data currency experienced will depend upon a number of factors, including specific workload characteristics and bandwidth between the local and remote sites.
  • Dynamic selection of the desired RPO based upon business requirements and optimization of available bandwidth.
  • Session support whereby data consistency at the remote site is internally managed across up to eight ESS machines located among the local and remote sites.
  • Efficient synchronization of the local and remote sites with support for PPRC failover and failback modes, helping to reduce the time required to switch back to the local site after a planned or unplanned outage.
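As a simplified illustration of the consistency-group idea listed above, the hypothetical sketch below batches recent local writes and applies each batch as a unit at the remote site, so the remote copy stays consistent and the recovery point is roughly the age of the last applied batch. It is not the Global Mirror algorithm itself.

```python
# Hypothetical consistency-group sketch: local writes are drained in batches
# and each batch is applied as a unit at the remote site, so the remote copy
# stays consistent while lagging slightly behind (the RPO).

import time

local_log = []                 # (timestamp, block, data) writes not yet applied remotely
remote = {}                    # remote-site image
last_consistent_at = time.time()

def write_local(block: int, data: str) -> None:
    local_log.append((time.time(), block, data))

def apply_consistency_group() -> None:
    """Drain the current batch and apply it as a unit at the remote site."""
    global last_consistent_at
    group = list(local_log)
    local_log.clear()
    for _, block, data in group:
        remote[block] = data
    if group:
        last_consistent_at = group[-1][0]   # remote is now current to this instant

def recovery_point_seconds() -> float:
    return time.time() - last_consistent_at

write_local(1, "a")
write_local(2, "b")
apply_consistency_group()
print(remote, round(recovery_point_seconds(), 3))
```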

PPRC Metro/Global Copy is designed to provide a long-distance remote copy solution across three sites using a combination of PPRC Metro Mirror and PPRC Global Copy. In a three-site configuration, the PPRC Metro Mirror is maintained between an ESS at the local site and an ESS at an intermediate site, while a PPRC Global Copy relationship is simultaneously maintained between the ESS at the intermediate site and an ESS at the remote site. When used with specific operational procedures, this can be designed to provide a data protection solution in the event of an unplanned outage at any one of the three sites. Alternatively, operational procedures, including the creation of a "safety copy" at the remote site using FlashCopy, can be used to design a solution that provides a consistent copy of the data at the remote site in the event of a regional outage affecting both the local and intermediate site.​
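To make the three-site data flow concrete, the hypothetical sketch below chains a synchronous hop (local to intermediate) with a non-synchronous hop (intermediate to remote), so the remote site lags until its backlog is drained; it is only an illustration of the cascade, not the product behaviour.

```python
# Hypothetical three-site cascade sketch: host writes mirror synchronously to
# the intermediate site, which forwards them non-synchronously to the remote
# site as bandwidth allows.

local, intermediate, remote = {}, {}, {}
pending_to_remote = []                       # backlog for the non-synchronous leg

def host_write(block: int, data: str) -> None:
    local[block] = data
    intermediate[block] = data               # synchronous (Metro Mirror) leg
    pending_to_remote.append((block, data))  # queued for the Global Copy leg

def drain_to_remote() -> None:
    while pending_to_remote:
        block, data = pending_to_remote.pop(0)
        remote[block] = data

host_write(3, "ledger")
print(remote)                                # {} - the remote site lags
drain_to_remote()
print(remote)                                # {3: 'ledger'}
```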

PPRC Version 2 is an optional feature to the ESS (feature numbers 85xx). In addition to supporting the functions listed above, PPRC Version 2 also includes support for all functions offered with PPRC (feature numbers 82xx).

Extended Remote Copy (XRC): XRC (zSeries Global Mirror) is a combined hardware and software business continuance solution for the zSeries and S/390 environments providing asynchronous mirroring between two ESSs at global distances. XRC is an optional feature on the ESS (feature numbers 81xx).​

FICON Extends the Performance Benefits of the Model 800

FICON extends the ESS's ability to deliver high bandwidth potential to the logical volumes needing it, when they need it. Older technologies are limited by the bandwidth of a single disk drive or a single ESCON channel, but FICON, working together with other ESS functions, provides a high-speed pipe with multiplexed operation.

New with the Model 800 is an increase in the number of logical paths supported per control unit image (to 256 paths). This increases the number of logical paths for FICON to 4,096 per ESS.​
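The 4,096 figure is simply the per-image limit multiplied out; dividing it by 256 implies 16 control unit images per ESS, as the short check below shows (the image count is inferred from the quoted totals, not stated here).

```python
# Sanity check of the logical-path figures quoted above: 256 paths per
# control unit image, and 4,096 per ESS implies 16 images (inferred).

paths_per_image = 256
paths_per_ess = 4096

print(paths_per_ess // paths_per_image)   # 16 control unit images
print(paths_per_image * 16)               # 4096 logical paths per ESS
```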

FICON offers increased per channel bandwidth that supports significant improvements in single stream sequential throughput (as much as two times as compared to ESCON). As a result, elapsed time for batch, data mining, or dump operations can be substantially improved, providing potential relief for customers whose batch or file maintenance windows are constrained.​

The data rate offered by FICON during the data transfer portion of response time is more than five times faster than ESCON, which can help improve response time and reduce connect time. The larger the transfer, the greater the typical reduction as a percentage of the total I/O response time. FICON can also help reduce or eliminate the pend time caused by busy director ports. For customers whose ESCON directors are experiencing as much as 45-50% busy conditions, this can provide significant response time reduction.​

Whereas ESCON channels can process only one operation at a time, FICON channels can process multiple concurrent data transfers. When used with Parallel Access Volumes (PAV), FICON and PAV work together to allow multiple data transfers to the same volume at the same time over the same channel, providing greater parallelism and bandwidth that can better exploit the maximum bandwidth of the ESS arrays. Another performance advantage offered by FICON is that the ESS accepts multiple channel command words (CCWs) concurrently without waiting for completion of the previous CCW.​

Finally, significant performance advantages can be realized by those customers who access their disk systems remotely. FICON helps reduce or eliminate data rate "droop" for distances up to 100 km for both read and write operations by using enhanced data buffering and pacing schemes.​

FICON also provides greater simplicity for the channel fabric. With support for more devices per channel (up to 16,384) and more logical subsystems per channel to match the increased channel bandwidth, ESCON channels can be consolidated into FICON channels at the rate of approximately 4:1. A well-configured ESS will typically need no more than eight FICON channel interfaces in order to exploit the bandwidth.
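For channel planning, the 4:1 consolidation ratio and the eight-channel guideline quoted above can be applied directly; the sketch below is a trivial, hypothetical helper built only from those two figures, not an IBM sizing tool.

```python
# Trivial planning sketch from the figures above: ESCON consolidates onto
# FICON at roughly 4:1, and a well-configured ESS typically needs no more
# than eight FICON channel interfaces. Hypothetical helper only.

import math

def ficon_channels_needed(escon_channels: int, ratio: int = 4, cap: int = 8) -> int:
    return min(math.ceil(escon_channels / ratio), cap)

print(ficon_channels_needed(32))  # 8
print(ficon_channels_needed(12))  # 3
```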

Model Summary Matrix

Model: 800
Physical Capacity: 582 GB to 55.9 TB
9.1 GB Drives: No
18.2 GB Drives: Yes
36.4 GB Drives: Yes
72.8 GB Drives: Yes
145.6 GB Drives: Yes
Cache: 8 GB to 64 GB
Power: Three-Phase
