General description
The next generation of Power Systems servers with POWER9 technology is built with innovations that can help deliver security and reliability for the data-intensive workloads of today’s enterprises. POWER9 technology is designed from the ground up for data-intensive workloads, such as databases and analytics.
The Power L922 server supports two processor sockets that offer 8 or 16 cores at a typical 3.4 - 3.9 GHz (maximum) frequency, 10 or 20 cores at a typical 2.9 - 3.8 GHz (maximum) frequency, or 24 cores at a typical 2.7 - 3.8 GHz (maximum) frequency in a 19-inch rack-mount, 2U (EIA units) drawer configuration. All the cores are active.
The server supports a maximum of 32 DDR4 DIMM slots. The memory features that are supported are 8 GB, 16 GB, 32 GB, 64 GB, and 128 GB, and run at different speeds of 2133, 2400, and 2666 Mbps, offering a maximum system memory of 4096 GB.
1.1 Power L922 system overview
The Power L922 (9008-22L) server is a powerful one- or two-socket server that includes up to 24 activated cores. If only one socket is populated at the time of ordering, the second can be populated later. It has the I/O configuration flexibility to meet today’s growth and tomorrow’s processing needs. This server supports two processor sockets, offering 8-core, 10-core, or 12-core processors running 2.7 - 3.9 GHz in a 19-inch rack-mount, 2U (EIA units) drawer configuration. All the cores are active.
The Power L922 server supports a maximum of 32 DDR4 Registered DIMM (RDIMM) slots. If only one processor socket is populated, then only 16 RDIMMs can be used. The memory features that are supported are 16 GB, 32 GB, 64 GB, and 128 GB, allowing for a maximum system memory of 2 TB if one socket is populated and 4 TB with both sockets populated.
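To make the memory ceilings concrete, the following minimal Python sketch (illustrative only; the slot counts and DIMM sizes are taken from the paragraph above) derives the 2 TB and 4 TB maximums:

   # Maximum system memory = populated RDIMM slots x largest supported DIMM size.
   DIMM_SLOTS_PER_SOCKET = 16     # 32 slots total across two sockets
   LARGEST_DIMM_GB = 128          # largest memory feature listed above

   for sockets in (1, 2):
       slots = DIMM_SLOTS_PER_SOCKET * sockets
       total_gb = slots * LARGEST_DIMM_GB
       print(f"{sockets} socket(s): {slots} slots x {LARGEST_DIMM_GB} GB = {total_gb // 1024} TB")
   # 1 socket(s): 16 slots x 128 GB = 2 TB
   # 2 socket(s): 32 slots x 128 GB = 4 TB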
Two different features are available for the storage backplane:
EL66: Eight SFF-3 bays, with an optional split backplane feature (EL68)
EC59: Optional PCIe3 Non-Volatile Memory Express (NVMe) carrier card with two M.2 module slots
Each of these backplane options uses leading-edge, integrated SAS RAID controller technology that is designed and patented by IBM.
The NVMe option offers fast start times and is ideally suited for hosting the rootvg of Virtual I/O Server (VIOS) partitions.
Figure 1-1 shows the Power L922 server.
Figure 1-1 The Power L922 server
 
Note: The server has no internal DVD option, although an external USB DVD drive is available with feature code (FC) EUA5. Customers are encouraged to use USB flash drives to install operating systems and VIOS whenever possible because they are much faster than DVDs.
1.1.1 The operator panel
The operator panel is formed of two parts. All of the servers have the first part, which provides the power switch and LEDs, and is shown in Figure 1-2.
Figure 1-2 Operator panel: Power switch and LEDs
The second part is an LCD panel with three buttons, and is shown in Figure 1-3.
Figure 1-3 Operator panel: LCD and switches
In the Power L922 server, the LCD panel is optional, but if a rack contains any of the IBM POWER9 processor-based scale-out servers, one of them must have an LCD panel.
The LCD panel can be moved (by using the correct procedure) from one server to another to allow appropriate services to be carried out.
1.2 Operating environment
Table 1-1 lists the electrical characteristics for the Power L922 server.
Table 1-1 Electrical characteristics of the Power L922 server
Electrical characteristics | Power L922 server properties
Operating voltage | 1400 W power supply: 200 - 240 V AC
Operating frequency | 47/63 Hz
Thermal output | 6,416 Btu/hour (maximum)
Power consumption | 1880 watts (maximum)
Power-source loading | 1.94 kVA (maximum configuration)
Phase | Single
 
Note: The maximum measured value is the worst case power consumption that is expected from a fully populated server under an intensive workload. The maximum measured value also accounts for component tolerance and non-ideal operating conditions. Power consumption and heat load vary greatly by server configuration and utilization. The IBM Systems Energy Estimator should be used to obtain a heat output estimate based on a specific configuration.
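As a rough cross-check of the figures in Table 1-1 (a minimal sketch only; it uses the standard 3.412 Btu/hour-per-watt conversion and is not a substitute for the IBM Systems Energy Estimator), the thermal output and implied power factor can be derived from the maximum power consumption:

   # Cross-check of Table 1-1: convert maximum power draw to heat load and power factor.
   BTU_PER_HOUR_PER_WATT = 3.412          # standard conversion factor

   max_power_watts = 1880                 # Table 1-1, power consumption (maximum)
   apparent_power_kva = 1.94              # Table 1-1, power-source loading (maximum)

   thermal_output = max_power_watts * BTU_PER_HOUR_PER_WATT
   power_factor = (max_power_watts / 1000) / apparent_power_kva

   print(f"Thermal output: {thermal_output:.0f} Btu/hour")   # ~6415, matching Table 1-1
   print(f"Implied power factor: {power_factor:.2f}")        # ~0.97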
Table 1-2 lists the environment requirements for the Power L922 server.
Table 1-2 Environment requirements for Power L922 server
Environment | Recommended operating | Allowable operating | Non-operating
Temperature | 18 - 27°C (64.4 - 80.6°F) | 5 - 40°C (41 - 104°F) | 5 - 45°C (41 - 113°F)
Humidity range | 5.5°C (42°F) dew point (DP) to 60% relative humidity (RH) or 15°C (59°F) DP | 8% - 85% RH | 8% - 80% RH
Maximum dew point | N/A | 24°C (75°F) | 27°C (80°F)
Maximum operating altitude | N/A | 3050 m (10000 ft) | N/A
Table 1-3 lists the noise emissions for the Power L922 server.
Table 1-3 Noise emissions for the Power L922 server
Product | LWAd (B), operating | LWAd (B), idle | LpAm (dB), operating | LpAm (dB), idle
Power L922 server | 7.8 | 6.9 | 61 | 53
 
Tip:
Declared level LWAd is the upper-limit A-weighted sound power level. Declared level LpAm is the mean A-weighted emission sound pressure level that is measured at the 1-meter bystander positions.
All measurements are made in conformance with ISO 7779 and declared in conformance with ISO 9296.
10 dB (decibel) equals 1 B (bel).
1.3 Physical package
Table 1-4 shows the physical dimensions of the Power L922 chassis. The server is available in a rack-mounted form factor and takes 2U (2 EIA units) of rack space.
Table 1-4 Physical dimensions of the rack-mounted Power L922 chassis
Dimension | Power L922 (9008-22L) server
Width | 482 mm (18.97 in.)
Depth | 766.5 mm (30.2 in.)
Height | 86.7 mm (3.4 in.)
Weight | 30.4 kg (67 lb)
Figure 1-4 shows the front view of the Power L922 server.
Figure 1-4 Front view of the Power L922 server
1.4 Server features
The Power L922 system chassis contains up to two processor modules. Each of the POWER9 processor chips in the server has a 64-bit architecture. All the cores are active.
1.4.1 Power L922 server features
This summary describes the standard features of the Power L922 server:
POWER9 processor modules:
 – 8-core, typical 3.4 - 3.9 GHz (maximum) POWER9 processor.
 – 10-core, typical 2.9 - 3.8 GHz (maximum) POWER9 processor.
 – 12-core, typical 2.7 - 3.8 GHz (maximum) POWER9 processor.
High-performance DDR4 error-correcting code (ECC) memory:
 – 8 GB, 16 GB, 32 GB, 64 GB, or 128 GB memory features; different sizes/configurations run at frequencies of 2133, 2400, or 2666 Mbps.
 – Up to 4 TB of DDR4 memory with two POWER processors.
 – Up to 2 TB of DDR4 memory with one POWER processor.
Storage features:
 – Eight small form factor (SFF) bays, one integrated SAS controller without cache, and JBOD, RAID 0, 5, 6, or 10. Optionally, split the SFF-3 bays and add a second integrated SAS controller without cache.
 – Expanded Function Storage Backplane 8 SFF-3 Bays/Single IOA with Write Cache. Optionally, attach an EXP12SX/EXP24SX SAS HDD/solid-state drive (SSD) Expansion Drawer to the single IOA.
Up to two PCIe3 NVMe carrier cards with two M.2 module slots (with up to four Mainstream 400 GB SSD NVMe M.2 modules). One PCIe3 NVMe carrier card can be ordered only with a storage backplane. If a PCIe3 NVMe carrier card is ordered with a storage backplane, then the optional split feature is not supported.
Peripheral Component Interconnect Express (PCIe) slots with a single processor:
 – One x16 Gen4 low-profile, half-length (Coherent Accelerator Processor Interface (CAPI)).
 – One x8 Gen4 low-profile, half-length (with x16 connector) (CAPI).
 – Two x8 Gen3 low-profile, half-length (with x16 connectors).
 – Two x8 Gen3 low-profile, half-length. (One of these slots is used for the required base LAN adapter.)
PCIe slots with two processors:
 – Three x16 Gen4 low-profile, half-length (CAPI).
 – Two x8 Gen4 low-profile, half-length (with x16 connectors) (CAPI).
 – Two x8 Gen3 low-profile, half-length (with x16 connectors).
 – Two x8 Gen3 low-profile, half-length. (One of these slots is used for the required base LAN adapter.)
Integrated:
 – Service processor.
 – EnergyScale technology.
 – Hot-plug and redundant cooling.
 – Two front USB 3.0 ports.
 – Two rear USB 3.0 ports.
 – Two Hardware Management Console (HMC) 1 GbE RJ45 ports.
 – One system port with an RJ45 connector.
 – Two hot-plug, redundant power supplies.
 – 19-inch rack-mounting hardware (2U).
1.4.2 Minimum features
The minimum Power L922 initial order must include a processor module, two 8 GB DIMMs, two power supplies and power cords, an operating system indicator, a cover set indicator, and a Language Group Specify. Also, it must include one of the storage options and one of the network options below:
Storage options:
 – For boot from NVMe: One NVMe carrier and one NVMe M.2 module.
 – For boot from local SFF-3 HDD/SSD: One storage backplane and one SFF-3 HDD or SSD.
 – For boot from SAN: Internal HDD or SSD and RAID cards are not required if #0837 (Boot from SAN) is selected. A Fibre Channel adapter must be ordered if #0837 is selected.
Network options:
 – One PCIe2 4-port 1 Gb Ethernet adapter.
 – One of the supported 10 Gb Ethernet adapters.
1.4.3 Power supply features
The Power L922 server supports two 1400 W, 200 - 240 V AC power supplies (#EL1B). Two power supplies are always installed. One power supply is required for normal system operation; the second is for redundancy.
1.5 Power L922 processor modules
A maximum of two processors with eight processor cores (#ELPV), two processors with 10 processor cores (#ELPW), or two processors with 12 cores (#ELPX) is allowed. All processor cores must be activated. The following list defines the allowed quantities of processor activation entitlements:
One 8-core, typical 3.4 - 3.9 GHz (maximum) processor (#ELPV) requires that eight processor activation codes be ordered. A maximum of eight processor activations (#ELAV) is allowed.
Two 8-core, typical 3.4 - 3.9 GHz (maximum) processors (#ELPV) require that 16 processor activation codes be ordered. A maximum of 16 processor activations (#ELAV) is allowed.
One 10-core, typical 2.9 - 3.8 GHz (maximum) processor (#ELPW) requires that 10 processor activation codes be ordered. A maximum of 10 processor activation code features (#ELAW) is allowed.
Two 10-core, typical 2.9 - 3.8 GHz (maximum) processors (#ELPW) require that 20 processor activation codes be ordered. A maximum of 20 processor activation code features (#ELAW) is allowed.
Two 12-core, typical 2.7 - 3.8 GHz (maximum) processors (#ELPX) require that 24 processor activation codes be ordered. A maximum of 24 processor activation code features (#ELAX) is allowed.
Table 1-5 summarizes the processor features that are available for the Power L922 server.
Table 1-5 Processor features for the Power L922 server
Feature code | Processor module description
ELPV | 8-core typical 3.4 - 3.9 GHz (maximum) POWER9 processor
ELPW | 10-core typical 2.9 - 3.8 GHz (maximum) POWER9 processor
ELPX | 12-core typical 2.7 - 3.8 GHz (maximum) POWER9 processor
1.5.1 Memory features
A minimum of 32 GB of memory is required on the Power L922 server. Memory upgrades require memory pairs. The base memory is two 8 GB DDR4 memory modules (#EM60).
Table 1-6 lists the memory features that are available for the Power L922 server.
Table 1-6 Summary of memory features for the Power L922 server
Feature code | DIMM capacity | Minimum quantity | Maximum quantity
EM60 | 8 GB | 0 | 32
EM62 | 16 GB | 0 | 32
EM63 | 32 GB | 0 | 32
EM64 | 64 GB | 0 | 32
EM65 | 128 GB | 0 | 32
 
Note: Different sizes/configurations run at different frequencies of 2133, 2400, and
2666 Mbps.
1.5.2 PCIe slots
The Power L922 server has up to nine PCIe hot-plug slots, providing excellent configuration flexibility and expandability.
The following list describes the available PCIe slots of the Power L922 server:
With two POWER9 processor single-chip modules (SCMs), nine PCIe slots are available: three x16 Gen4 low-profile, half-length slots (CAPI); two x8 Gen4 low-profile, half-length slots (with x16 connectors) (CAPI); two x8 Gen3 low-profile, half-length slots (with x16 connectors); and two x8 Gen3 low-profile, half-length slots (one of these slots is used for the required base LAN adapter).
With one POWER9 processor SCM, six PCIe slots are available: one x16 Gen4 low-profile, half-length slot (CAPI); one x8 Gen4 low-profile, half-length slot (with a x16 connector) (CAPI); two x8 Gen3 low-profile, half-length slots (with x16 connectors); and two x8 Gen3 low-profile, half-length slots (one of these slots is used for the required base LAN adapter).
The x16 slots can provide up to twice the bandwidth of x8 slots because they offer twice as many PCIe lanes. PCIe Gen4 slots can support up to twice the bandwidth of a PCIe3 slot, and PCIe3 slots can support up to twice the bandwidth of a PCIe Gen2 slot, assuming an equivalent number of PCIe lanes.
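To make this comparison concrete, the following minimal Python sketch estimates theoretical unidirectional slot bandwidth from generic PCIe per-lane signaling rates and encoding overhead (these are industry figures, not IBM-measured values, and real adapters see additional protocol overhead):

   # Theoretical unidirectional per-lane throughput in GB/s, before protocol overhead.
   # PCIe Gen2: 5 GT/s with 8b/10b encoding; Gen3: 8 GT/s and Gen4: 16 GT/s with 128b/130b.
   PER_LANE_GBPS = {
       "Gen2": 5.0 * (8 / 10) / 8,        # 0.50 GB/s
       "Gen3": 8.0 * (128 / 130) / 8,     # ~0.99 GB/s
       "Gen4": 16.0 * (128 / 130) / 8,    # ~1.97 GB/s
   }

   def slot_bandwidth_gbps(gen: str, lanes: int) -> float:
       """Approximate unidirectional bandwidth of a PCIe slot in GB/s."""
       return PER_LANE_GBPS[gen] * lanes

   for gen, lanes in (("Gen3", 8), ("Gen3", 16), ("Gen4", 8), ("Gen4", 16)):
       print(f"PCIe {gen} x{lanes}: ~{slot_bandwidth_gbps(gen, lanes):.1f} GB/s")
   # A Gen4 x16 slot (~31.5 GB/s) offers roughly four times the bandwidth of a Gen3 x8 slot (~7.9 GB/s).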
At least one PCIe Ethernet adapter is required on the server by IBM to ensure proper manufacture, test, and support of the server. One of the x8 PCIe slots is used for this required adapter.
The Power L922 server is smarter about energy efficiency when cooling the PCIe adapter environment. It senses which IBM PCIe adapters are installed in its PCIe slots and, if an adapter requires higher levels of cooling, it automatically speeds up the fans to increase airflow across the PCIe adapters. Faster fans increase the sound level of the server.
1.6 Disk and media features
Three backplane options are available for the Power L922 servers:
Storage Backplane 8 SFF-3 Bays (#EL66)
4 + 4 SFF-3 Bays split backplane (#EL68)
Expanded Function Storage Backplane 8 SFF-3 Bays/Single IOA with Write Cache
(#EL67)
#EL66 provides eight SFF-3 bays and one SAS controller with zero write cache.
By optionally adding #EL68, a second integrated SAS controller with no write cache is provided, and the eight SFF-3 bays are logically divided into two sets of four bays. Each SAS controller independently runs one of the four-bay sets of drives.
The backplane options provide SFF-3 SAS bays in the system unit. These 2.5-inch or SFF SAS bays can contain SAS drives (HDD or SSD) mounted on a Gen3 tray or carrier. Thus, the drives are designated SFF-3. SFF-1 or SFF-2 drives do not fit in an SFF-3 bay. All SFF-3 bays support concurrent maintenance or hot-plug capability.
These backplane options use leading-edge, integrated SAS RAID controller technology that is designed and patented by IBM. A custom-designed IBM PowerPC® based ASIC chip is the basis of these SAS RAID controllers and provides RAID 5 and RAID 6 performance levels, especially for SSD. Internally, SAS ports are implemented and provide plenty of bandwidth. The integrated SAS controllers are placed in dedicated slots and do not reduce the number of available PCIe slots.
This backplane option supports HDDs or SSDs or a mixture of HDDs and SSDs in the SFF-3 bays. Mixing HDDs and SSDs applies even within a single set of four bays of the split backplane option.
 
Note: If mixing HDDs and SSDs, they must be in separate arrays (unless you use the
IBM Easy Tier® function).
This backplane option can offer different drive protection options: RAID 0, RAID 5, RAID 6, or RAID 10. RAID 5 requires a minimum of three drives of the same capacity. RAID 6 requires a minimum of four drives of the same capacity. RAID 10 requires a minimum of two drives. Hot-spare capability is supported by RAID 5, RAID 6, or RAID 10.
This backplane option is supported by Linux and VIOS. It is highly recommended but not required that the drives be protected.
Unlike the hot-plug PCIe slots and SAS bays, concurrent maintenance is not available for the integrated SAS controllers. Scheduled downtime is required if a service action is required for these integrated resources.
In addition to supporting HDDs and SSDs in the SFF-3 SAS bays, the Expanded Function Storage Backplane (#EL67) supports the optional attachment of an EXP12SX/EXP24SX drawer. All bays are accessed by both of the integrated SAS controllers. The bays support concurrent maintenance (hot plug).
Table 1-7 shows the available disk drive FCs that can be installed in the Power L922 server.
 
Note: The disk drives are also supported by the virtual I/O server (VIOS).
Table 1-7 Disk drive feature code description for the Power L922 server
Feature code | CCIN | Description | Maximum | OS support
ESRM | 5B43 | 300 GB 15K RPM SAS SFF-2 4 KB Block Cached Disk Drive | 672 | Linux
EL1P | 19B1 | 300 GB 15K RPM SAS SFF-2 Disk Drive | 672 | Linux
ESRL | 5B41 | 300 GB 15K RPM SAS SFF-3 4 KB Block Cached Disk Drive | 8 | Linux
ELDB | 59E0 | 300 GB 15K RPM SAS SFF-3 Disk Drive | 8 | Linux
ES94 | 5B10 | 387 GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | 336 | Linux
ES90 | 5B13 | 387 GB Enterprise SAS 4k SFF-3 SSD for AIX/Linux | 8 | Linux
ESGV | 5B16 | 387 GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux | 336 | Linux
ESGT | 5B19 | 387 GB Enterprise SAS 5xx SFF-3 SSD for AIX/Linux | 8 | Linux
ESB0 | 5B19 | 387 GB Enterprise SAS 5xx SFF-3 SSD for AIX/Linux | 8 | Linux
ESB2 | 5B16 | 387 GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux | 336 | Linux
ESB8 | 5B13 | 387 GB Enterprise SAS 4k SFF-3 SSD for AIX/Linux | 8 | Linux
ESBA | 5B10 | 387 GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | 336 | Linux
ES14 | N/A | 400 GB Mainstream SSD NVMe M.2 module | 2 | Linux
EL1Q | 19B3 | 600 GB 10K RPM SAS SFF-2 Disk Drive | 672 | Linux
ELEV | 59D2 | 600 GB 10K RPM SAS SFF-2 Disk Drive 4 KB Block | 672 | Linux
ELD5 | 59D0 | 600 GB 10K RPM SAS SFF-3 Disk Drive | 8 | Linux
ELF5 | 59D3 | 600 GB 10K RPM SAS SFF-3 Disk Drive 4 KB Block | 8 | Linux
ESRR | 5B47 | 600 GB 15K RPM SAS SFF-2 4 KB Block Cached Disk Drive | 672 | Linux
ELDP | 59CF | 600 GB 15K RPM SAS SFF-2 Disk Drive - 5xx Block | 672 | Linux
ESRP | 5B45 | 600 GB 15K RPM SAS SFF-3 4 KB Block Cached Disk Drive | 8 | Linux
ESNA | 5B11 | 775 GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | 336 | Linux
ESNC | 5B14 | 775 GB Enterprise SAS 4k SFF-3 SSD for AIX/Linux | 8 | Linux
ESGZ | 5B17 | 775 GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux | 336 | Linux
ESGX | 5B1A | 775 GB Enterprise SAS 5xx SFF-3 SSD for AIX/Linux | 8 | Linux
ESB4 | 5B1A | 775 GB Enterprise SAS 5xx SFF-3 SSD for AIX/Linux | 8 | Linux
ESB6 | 5B17 | 775 GB Enterprise SAS 5xx SFF-2 SSD for AIX/Linux | 336 | Linux
ESBE | 5B14 | 775 GB Enterprise SAS 4k SFF-3 SSD for AIX/Linux | 8 | Linux
ESBG | 5B11 | 775 GB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | 336 | Linux
ESJ0 | 5B29 | 931 GB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | 336 | Linux
ESJ8 | 5B2B | 931 GB Mainstream SAS 4k SFF-3 SSD for AIX/Linux | 8 | Linux
ELD3 | 59CD | 1.2 TB 10K RPM SAS SFF-2 Disk Drive (Linux) | 672 | Linux
ELF3 | 59DA | 1.2 TB 10K RPM SAS SFF-2 Disk Drive 4 KB Block | 672 | Linux
ELF9 | 59DB | 1.2 TB 10K RPM SAS SFF-3 Disk Drive 4 KB Block | 8 | Linux
ESNE | 5B12 | 1.55 TB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | 336 | Linux
ESNG | 5B15 | 1.55 TB Enterprise SAS 4k SFF-3 SSD for AIX/Linux | 8 | Linux
ESBJ | 5B15 | 1.55 TB Enterprise SAS 4k SFF-3 SSD for AIX/Linux | 8 | Linux
ESBL | 5B12 | 1.55 TB Enterprise SAS 4k SFF-2 SSD for AIX/Linux | 336 | Linux
ELFT | 59DD | 1.8 TB 10K RPM SAS SFF-2 Disk Drive 4 KB Block | 672 | Linux
ELFV | 59DE | 1.8 TB 10K RPM SAS SFF-3 Disk Drive 4 KB Block | 8 | Linux
ESJ2 | 5B21 | 1.86 TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | 336 | Linux
ESJA | 5B20 | 1.86 TB Mainstream SAS 4k SFF-3 SSD for AIX/Linux | 8 | Linux
ESJ4 | 5B2D | 3.72 TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | 336 | Linux
ESJC | 5B2C | 3.72 TB Mainstream SAS 4k SFF-3 SSD for AIX/Linux | 8 | Linux
EL62 | 5B1D | 3.82 - 4.0 TB 7200 RPM 4 KB SAS LFF-1 Nearline Disk Drive | 336 | Linux
ESJ6 | 5B2F | 7.45 TB Mainstream SAS 4k SFF-2 SSD for AIX/Linux | 336 | Linux
ESJE | 5B2E | 7.45 TB Mainstream SAS 4k SFF-3 SSD for AIX/Linux | 8 | Linux
EL64 | 5B1F | 7.72 - 8.0 TB 7200 RPM 4 KB SAS LFF-1 Nearline Disk Drive | 336 | Linux
ELQP | 19B1 | Quantity 150 of EL1P | 4 | Linux
ELQQ | 19B3 | Quantity 150 of EL1Q | 4 | Linux
ELR2 | 5B1D | Quantity 150 of EL62 3.86 - 4.0 TB 7200 RPM 4 KB LFF-1 Disk | 2 | Linux
ELR4 | 5B1F | Quantity 150 of EL64 7.72 - 8.0 TB 7200 RPM 4 KB LFF-1 Disk | 2 | Linux
ELQ3 | 59CD | Quantity 150 of ELD3 (1.2 TB 10k SFF-2) | 4 | Linux
ELQ0 | 59CF | Quantity 150 of ELDP 600 GB 15k RPM SFF-2 Disk | 4 | Linux
ELQV | 59D2 | Quantity 150 of ELEV | 4 | Linux
ELQ2 | 59DA | Quantity 150 of ELF3 | 4 | Linux
ELQT | 59DD | Quantity 150 of ELFT | 4 | Linux
ER94 | 5B10 | Quantity 150 of ES94 387 GB SAS 4k | 2 | Linux
ERGV | 5B16 | Quantity 150 of ESGV 387 GB SSD 4k | 2 | Linux
ERGZ | 5B17 | Quantity 150 of ESGZ 775 GB SSD 4k | 2 | Linux
ERJ0 | 5B29 | Quantity 150 of ESJ0 931 GB SAS 4k | 2 | Linux
ERJ2 | 5B21 | Quantity 150 of ESJ2 1.86 TB SAS 4k | 2 | Linux
ERJ4 | 5B2D | Quantity 150 of ESJ4 3.72 TB SAS 4k | 2 | Linux
ERJ6 | 5B2F | Quantity 150 of ESJ6 7.45 TB SAS 4k | 2 | Linux
ERNA | 5B11 | Quantity 150 of ESNA 775 GB SSD 4k | 2 | Linux
ERNE | 5B12 | Quantity 150 of ESNE 1.55 TB SSD 4k | 2 | Linux
ESVM | 5B43 | Quantity 150 of ESRM | 4 | Linux
ESVR | 5B47 | Quantity 150 of ESRR | 4 | Linux
ESQ2 | 5B16 | Quantity 150 of ESB2 387 GB SAS 4k | 2 | Linux
ESQ6 | 5B17 | Quantity 150 of ESB6 775 GB SAS 4k | 2 | Linux
ESQA | 5B10 | Quantity 150 of ESBA 387 GB SAS 4k | 2 | Linux
ESQG | 5B11 | Quantity 150 of ESBG 775 GB SAS 4k | 2 | Linux
ESQL | 5B12 | Quantity 150 of ESBL 1.55 TB SAS 4k | 2 | Linux
The RDX docking station EUA4 accommodates RDX removable disk cartridges of any capacity. The disk is in a protective rugged cartridge enclosure that plugs into the docking station. The docking station holds one removable rugged disk drive/cartridge at a time. The rugged removable disk cartridge and docking station perform saves, restores, and backups similar to a tape drive. This docking station can be an excellent entry capacity/performance option.
The stand-alone USB DVD drive (EUA5) is an optional, stand-alone external USB-DVD device. It requires high current at 5 V and must use the front USB 3.0 port.
1.7 I/O drawers for the Power L922 server
If more Gen3 PCIe slots beyond the system node slots are required, PCIe3 I/O drawers can be attached to the Power L922 server.
EXP24SX/EXP12SX SAS Storage Enclosures (#ELLS or #ELLL) are also supported and provide storage capacity.
The 7226-1U3 model offers a 1U rack-mountable dual bay enclosure with storage device options of LTO5, 6, 7, and 8 tape drives with both SAS and Fibre Channel interfaces. The 7226 model also offers DVD-RAM SAS and USB drive features, and RDX 500 GB, 1 TB, and 2 TB drive options. Up to two drives (or four DVD-RAM drives) can be installed in any combination in the 7226 enclosure.
1.7.1 PCIe3 I/O expansion drawer
This 19-inch, 4U (4 EIA) enclosure provides PCIe3 slots outside of the system unit. It has two module bays. One 6-slot fan-out module (EMXH or ELMG) can be placed in each module bay. Two 6-slot modules provide a total of 12 PCIe3 slots. Each fan-out module is connected to a PCIe3 Optical Cable adapter in the system unit over an active optical CXP cable (AOC) pair or CXP copper cable pair.
The PCIe3 I/O Expansion Drawer has two redundant, hot-plug power supplies. Each power supply has its own separately ordered power cord. The two power cords plug into a power supply conduit that connects to the power supply. The single-phase AC power supply is rated at 1030 watts and can use 100 - 120 V or 200 - 240 V. If using 100 - 120 V, then the maximum is 950 watts. As a preferred practice, connect the power supply to a power distribution unit (PDU) in the rack. Power Systems PDUs are designed for a 200 - 240 V electrical source.
A blind swap cassette (BSC) is used to house the full-high adapters that go into these slots. The BSC is the same BSC that was used with the previous generation server's 12X attached I/O drawers (#5802, #5803, #5877, and #5873). The drawer includes a full set of BSCs, even if the BSCs are empty.
Concurrent repair and addition or removal of PCIe adapters is done by using HMC-guided menus or operating system support utilities.
Figure 1-5 shows a PCIe3 I/O expansion drawer.
Figure 1-5 PCIe3 I/O expansion drawer
1.7.2 I/O drawers and usable PCI slots
Figure 1-6 shows the rear view of the PCIe3 I/O expansion drawer that is equipped with two PCIe3 6-slot fan-out modules with the location codes for the PCIe adapter slots.
Figure 1-6 Rear view of a PCIe3 I/O expansion drawer with PCIe slots location codes
Table 1-8 provides details about the PCI slots in the PCIe3 I/O expansion drawer that is equipped with two PCIe3 6-slot fan-out modules.
Table 1-8 PCIe slot locations for the PCIe3 I/O expansion drawer with two fan-out modules
Slot | Location code | Description
Slot 1 | P1-C1 | PCIe3, x16
Slot 2 | P1-C2 | PCIe3, x8
Slot 3 | P1-C3 | PCIe3, x8
Slot 4 | P1-C4 | PCIe3, x16
Slot 5 | P1-C5 | PCIe3, x8
Slot 6 | P1-C6 | PCIe3, x8
Slot 7 | P2-C1 | PCIe3, x16
Slot 8 | P2-C2 | PCIe3, x8
Slot 9 | P2-C3 | PCIe3, x8
Slot 10 | P2-C4 | PCIe3, x16
Slot 11 | P2-C5 | PCIe3, x8
Slot 12 | P2-C6 | PCIe3, x8
All slots support full-length, regular-height adapters or short (low-profile) with a regular-height tailstock in single-wide, Gen3 BSCs.
Slots C1 and C4 in each PCIe3 6-slot fan-out module are x16 PCIe3 buses, and slots C2, C3, C5, and C6 are x8 PCIe buses.
All slots support enhanced error handling (EEH).
All PCIe slots are hot swappable and support concurrent maintenance.
Table 1-9 summarizes the maximum number of I/O drawers that are supported and the total number of PCI slots that are available.
Table 1-9 Maximum number of I/O drawers that are supported and total number of PCI slots
System | Maximum number of I/O expansion drawers | Maximum number of I/O fan-out modules | Maximum PCIe slots
Power L922 server (1-socket) | 1 | 1 | 11
Power L922 server (2-socket) | 2 | 3 | 24
1.7.3 EXP24SX SAS Storage Enclosure (#ELLS) and EXP12SX SAS Storage Enclosure (#ELLL)
If you need more disks than are available with the internal disk bays, you can attach more external disk subsystems, such as EXP24SX SAS Storage Enclosure (#ELLS) or EXP12SX SAS Storage Enclosure (#ELLL).
The EXP24SX is a storage expansion enclosure with twenty-four 2.5-inch SFF SAS bays. It supports up to 24 hot-plug HDDs or SSDs in only 2 EIA of space in a 19-inch rack. The EXP24SX SFF bays use SFF Gen2 (SFF-2) carriers or trays.
The EXP12SX is a storage expansion enclosure with twelve 3.5-inch large form factor (LFF) SAS bays. It supports up to 12 hot-plug HDDs in only 2 EIA of space in a 19-inch rack. The EXP12SX bays use LFF Gen1 (LFF-1) carriers or trays. The 4 KB sector drives (#4096 or #4224) are supported. SSDs are not supported.
With Linux/VIOS, the EXP24SX or the EXP12SX can be ordered with four sets of six bays (mode 4), two sets of 12 bays (mode 2), or one set of 24 bays (mode 1). It is possible to change the mode setting in the field by using software commands along with a documented procedure.
 
Important: When changing modes, a skilled, technically qualified person should follow the special documented procedures. Improperly changing modes can potentially destroy existing RAID sets, prevent access to existing data, or allow other partitions to access another partition's existing data.
Four mini-SAS HD ports on the EXP24SX or EXP12SX are attached to PCIe3 SAS adapters or attached to an integrated SAS controller in the Power L922 server.
The attachment between the EXP24SX or EXP12SX and the PCIe3 SAS adapters or integrated SAS controllers is through SAS YO12 or X12 cables. All ends of the YO12 and X12 cables have mini-SAS HD narrow connectors.
The EXP24SX or EXP12SX includes redundant AC power supplies and two power cords.
Figure 1-7 shows the EXP24SX drawer.
Figure 1-7 The EXP24SX drawer
Figure 1-8 shows the EXP12SX drawer.
Figure 1-8 The EXP12SX drawer
1.8 System racks
The Power L922 server is designed to mount in the 36U 7014-T00 (#0551), the 42U 7014-T42 (#0553), or the IBM 42U Slim Rack (7965-94Y) rack. These racks are built to the 19-inch EIA 310D standard.
 
Order information: The racking approach for the initial order must be either a 7014-T00, 7014-T42, or 7965-94Y model. If an extra rack is required for I/O expansion drawers as a miscellaneous equipment specification (MES) order to an existing system, either an #0551, #0553, or #ER05 rack must be ordered.
You must leave 2U of space at either the bottom or top of the rack, depending on the client's cabling preferences, to allow for cabling to exit the rack.
If a system will be installed in a rack or cabinet that is not an IBM rack, ensure that the rack meets the requirements that are described in 1.8.10, “Original equipment manufacturer rack” on page 27.
 
Responsibility: The client is responsible for ensuring that the installation of the drawer in the preferred rack or cabinet results in a configuration that is stable, serviceable, safe, and compatible with the drawer requirements for power, cooling, cable management, weight, and rail security.
1.8.1 IBM 7014 Model T00 rack
The 1.8-meter (71-inch) model T00 is compatible with past and present IBM Power Systems servers. The features of the T00 rack are as follows:
Has 36U (EIA units) of usable space.
Has optional removable side panels.
Has optional side-to-side mounting hardware for joining multiple racks.
Has increased power distribution and weight capacity.
Supports both AC and DC configurations.
Up to four PDUs can be mounted in the PDU bays, but other PDUs can fit inside the rack. For more information, see 1.8.7, “The AC power distribution unit and rack content” on page 21.
For the T00 rack, three door options are available:
 – Front Door for 1.8 m Rack (#6068).
This feature provides an attractive black full height rack door. The door is steel, with a perforated flat front surface. The perforation pattern extends from the bottom to the top of the door to enhance ventilation and provide some visibility into the rack.
 – A 1.8 m Rack Acoustic Door (#6248).
This feature provides a front and rear rack door that is designed to reduce acoustic sound levels in a general business environment.
 – A 1.8 m Rack Trim Kit (#6263).
If no front door is used in the rack, this feature provides a decorative trim kit for the front.
Ruggedized Rack Feature.
For enhanced rigidity and stability of the rack, the optional Ruggedized Rack Feature
(#6080) provides extra hardware that reinforces the rack and anchors it to the floor. This hardware is designed primarily for use in locations where earthquakes are a concern. The feature includes a large steel brace or truss that bolts into the rear of the rack.
It is hinged on the left side so it can swing out of the way for easy access to the rack drawers when necessary. The Ruggedized Rack Feature also includes hardware for bolting the rack to a concrete floor or a similar surface, and bolt-in steel filler panels for any unoccupied spaces in the rack.
Weights are as follows:
 – T00 base empty rack: 244 kg (535 lb).
 – T00 full rack: 816 kg (1795 lb).
 – Maximum weight of drawers is 572 kg (1260 lb).
 – Maximum weight of drawers in a zone 4 earthquake environment is 490 kg (1080 lb). This number equates to 13.6 kg (30 lb) per EIA.
 
Important: If more weight is added to the top of the rack, for example, adding
#6117, the 490 kg (1080 lb) must be reduced by the weight of the addition. As an example, #6117 weighs approximately 45 kg (100 lb), so the new maximum weight of drawers that the rack can support in a zone 4 earthquake environment is 445 kg (980 lb). In the zone 4 earthquake environment, the rack must be configured starting with the heavier drawers at the bottom of the rack.
1.8.2 IBM 7014 Model T42 rack
The 2.0-meter (79.3-inch) Model T42 rack addresses the client requirement for a tall enclosure to house the maximum amount of equipment in the smallest possible floor space. The following features are for the model T42 rack (which differ from the model T00):
The T42 rack has 42U (EIA units) of usable space (6U of extra space).
The model T42 supports AC power only.
Weights are as follows:
 – T42 base empty rack: 261 kg (575 lb)
 – T42 full rack: 930 kg (2045 lb)
The available door options for the Model T42 rack are shown in Figure 1-9.
Figure 1-9 Door options for the T42 rack
The 2.0 m Rack Trim Kit (#6272) is used if no front door is used in the rack.
The Front Door for a 2.0 m Rack (#6069) is made of steel, with a perforated flat front surface. The perforation pattern extends from the bottom to the top of the door to enhance ventilation and provide some visibility into the rack. This door is non-acoustic and has a depth of about 25 mm (1 in.).
The 2.0 m Rack Acoustic Door (#6249) consists of a front and rear door to reduce noise by approximately 6 dB(A). It has a depth of approximately 191 mm (7.5 in.).
The High-End Appearance Front Door (#6250) provides a front rack door with a field-installed Power 780 logo indicating that the rack contains a Power 780 system. The door is not acoustic and has a depth of about 90 mm (3.5 in.).
 
High end: For the High-End Appearance Front Door (#6250), use the High-End Appearance Side Covers (#6238) to make the rack appear as though it is a high-end server (but in a 19-inch rack format instead of a 24-inch rack).
The #ERG7 provides an attractive black full height rack door. The door is steel, with a perforated flat front surface. The perforation pattern extends from the bottom to the top of the door to enhance ventilation and provide some visibility into the rack. The non-acoustic door has a depth of about 134 mm (5.3 in.).
Rear Door Heat Exchanger
To lead away more heat, a special door that is named the Rear Door Heat Exchanger (RDHX) (#6858) is available. This door replaces the standard rear door on the rack. Copper tubes that are attached to the rear door circulate chilled water, which is provided by the customer. The chilled water removes heat from the exhaust air being blown through the servers and attachments that are mounted in the rack. With industry standard quick couplings, the water lines in the door attach to the customer-supplied secondary water loop.
For more information about planning for the installation of the IBM RDHX, see the IBM Knowledge Center at:
1.8.3 IBM 42U Slim Rack 7965-94Y
The 2.0-meter (79-inch) model 7965-94Y is compatible with past and present IBM Power Systems servers and provides an excellent 19-inch rack enclosure for your data center. Its 600 mm (23.6 in.) width combined with its 1100 mm (43.3 in.) depth plus its 42 EIA enclosure capacity provides great footprint efficiency for your systems. This enclosure can be easily placed on standard 24-inch floor tiles.
The IBM 42U Slim Rack has a lockable perforated front steel door, providing ventilation, physical security, and visibility of indicator lights in the installed equipment within. In the rear, either a lockable perforated rear steel door (#EC02) or a lockable RDHX (1164-95X) is used. Lockable optional side panels (#EC03) increase the rack's aesthetics, help control airflow through the rack, and provide physical security. Multiple 42U Slim Racks can be bolted together to create a rack suite (#EC04).
Up to six optional 1U PDUs can be placed vertically in the sides of the rack. More PDUs can be placed horizontally, but they each use 1U of space in this position.
1.8.4 #0551
The 1.8-Meter Rack (#0551) is a 36 EIA unit rack. The rack that is delivered as #0551 is the same rack that is delivered when you order the 7014-T00 rack. The included features might vary. Certain features that are delivered as part of the 7014-T00 rack must be ordered separately with #0551.
1.8.5 #0553
The 2.0-Meter Rack (#0553) is a 42 EIA unit rack. The rack that is delivered as #0553 is the same rack that is delivered when you order the 7014-T42 rack. The included features might vary. Certain features that are delivered as part of the 7014-T42 rack must be ordered separately with #0553.
1.8.6 #ER05
This feature provides a 19-inch, 2.0-meter high rack with 42 EIA units of total space for installing a rack-mounted Central Electronics Complex or expansion units. The 600 mm wide rack fits within a data center's 24-inch floor tiles and provides better thermal and cable management capabilities. The following features are required on the #ER05:
Front door (#EC01)
Rear door (#EC02) or RDHX indicator (#EC05)
PDUs on the rack are optional. Each #7196 and #7189 PDU uses one of six vertical mounting bays. Each PDU beyond four uses 1U of rack space.
If ordering Power Systems equipment in an MES order, use the equivalent rack #ER05 instead of 7965-94Y so that IBM Manufacturing can include the hardware in the rack.
1.8.7 The AC power distribution unit and rack content
For rack models T00 and T42, 12-outlet PDUs are available, which include the AC PDUs (#9188 and #7188) and the AC Intelligent PDU+ (#5889 and #7109). The Intelligent PDU+ (#5889 and #7109) is identical to the #9188 and #7188 PDUs, but is equipped with one Ethernet port, one console serial port, and one RS232 serial port for power monitoring.
The PDUs have 12 client-usable IEC 320-C13 outlets. There are six groups of two outlets that are fed by six circuit breakers. Each outlet is rated up to 10 amps, but each group of two outlets is fed from one 15 amp circuit breaker.
High-function PDUs provide more electrical power per PDU and offer better “PDU footprint” efficiency. In addition, they are intelligent PDUs that provide insight to actual power usage by receptacle and also provide remote power on/off capability for easier support by individual receptacle. The new PDUs can be ordered as #EPTJ, #EPTL, #EPTN, and #EPTQ.
High-function PDU FCs are shown in Table 1-10.
Table 1-10 Available high-function PDUs
PDUs | 1-phase or 3-phase depending on country wiring standards | 3-phase 208 V depending on country wiring standards
Nine C19 receptacles | EPTJ | EPTL
Twelve C13 receptacles | EPTN | EPTQ
In addition, the following high-function PDUs were announced in October 2019:
High Function 9xC19 PDU plus (#ECJJ):
This is an intelligent, switched 200-240 volt AC Power Distribution Unit (PDU) plus with nine C19 receptacles on the front of the PDU. The PDU is mounted on the rear of the rack making the nine C19 receptacles easily accessible. For comparison, this is most similar to the earlier generation #EPTJ PDU.
High Function 9xC19 PDU plus 3-Phase (#ECJL):
This is an intelligent, switched 208 volt 3-phase AC Power Distribution Unit (PDU) plus with nine C19 receptacles on the front of the PDU. The PDU is mounted on the rear of the rack making the nine C19 receptacles easily accessible. For comparison, this is most similar to the earlier generation #EPTL PDU.
High Function 12xC13 PDU plus (#ECJN):
This is an intelligent, switched 200-240 volt AC Power Distribution Unit (PDU) plus with twelve C13 receptacles on the front of the PDU. The PDU is mounted on the rear of the rack making the twelve C13 receptacles easily accessible. For comparison, this is most similar to the earlier generation #EPTN PDU.
High Function 12xC13 PDU plus 3-Phase (#ECJQ):
This is an intelligent, switched 208 volt 3-phase AC Power Distribution Unit (PDU) plus with twelve C13 receptacles on the front of the PDU. The PDU is mounted on the rear of the rack making the twelve C13 receptacles easily accessible. For comparison, this is most similar to the earlier generation #EPTQ PDU.
Table 1-11 on page 22 lists the feature codes for the high-function PDUs announced in October 2019.
Table 1-11 High-function PDUs available after October 2019
PDUs | 1-phase or 3-phase depending on country wiring standards | 3-phase 208 V depending on country wiring standards
Nine C19 receptacles | ECJJ | ECJL
Twelve C13 receptacles | ECJN | ECJQ
Four PDUs can be mounted vertically in the back of the T00 and T42 racks. Figure 1-10 shows the placement of the four vertically mounted PDUs. In the rear of the rack, two more PDUs can be installed horizontally in the T00 rack and three in the T42 rack. The four vertical mounting locations are filled first in the T00 and T42 racks. Mounting PDUs horizontally uses 1U per PDU and reduces the space available for other racked components. When mounting PDUs horizontally, the best approach is to use fillers in the EIA units that are occupied by these PDUs to facilitate proper air-flow and ventilation in the rack.
Figure 1-10 PDU placement and PDU view
The PDU receives power through a UTG0247 power-line connector. Each PDU requires one PDU-to-wall power cord. Various power cord features are available for various countries and applications by varying the PDU-to-wall power cord, which must be ordered separately. Each power cord provides the unique design characteristics for the specific power requirements. To match new power requirements and save previous investments, these power cords can be requested with an initial order of the rack or with a later upgrade of the rack features.
Table 1-12 shows the available wall power cord options for the PDU and intelligent power distribution unit (iPDU) features, which must be ordered separately.
Table 1-12 Wall power cord options for the PDU and iPDU features
Feature code | Wall plug | Rated voltage (V AC) | Phase | Rated amperage | Geography
#6653 | IEC 309, 3P+N+G, 16 A | 230 | 3 | 16 amps/phase | Internationally available
#6489 | IEC 309, 3P+N+G, 32 A | 230 | 3 | 32 amps/phase | EMEA
#6654 | NEMA L6-30 | 200-208, 240 | 1 | 24 amps | US, Canada, LA, and Japan
#6655 | RS 3750DP (watertight) | 200-208, 240 | 1 | 24 amps | US, Canada, LA, and Japan
#6656 | IEC 309, P+N+G, 32 A | 230 | 1 | 24 amps | EMEA
#6657 | PDL | 230-240 | 1 | 32 amps | Australia, New Zealand
#6658 | Korean plug | 220 | 1 | 30 amps | North and South Korea
#6492 | IEC 309, 2P+G, 60 A | 200-208, 240 | 1 | 48 amps | US, Canada, LA, and Japan
#6491 | IEC 309, P+N+G, 63 A | 230 | 1 | 63 amps | EMEA
 
Notes: Ensure that the appropriate power cord feature is configured to support the power that is being supplied. Based on the power cord that is used, the PDU can supply
4.8 - 19.2 kVA. The power of all the drawers plugged into the PDU must not exceed the power cord limitation.
The Universal PDUs are compatible with previous models.
To better enable electrical redundancy, each server has two power supplies that must be connected to separate PDUs, which are not included in the base order.
For maximum availability, a preferred approach is to connect power cords from the same system to two separate PDUs in the rack, and to connect each PDU to independent power sources.
For detailed power requirements and power cord details about the 7014 and 7965-94Y racks, see the “Planning for power” section in the IBM Knowledge Center at:
1.8.8 Rack-mounting rules
Consider the following primary rules when you mount the system into a rack:
The system is designed to be placed at any location in the rack. For rack stability, start filling a rack from the bottom.
Any remaining space in the rack can be used to install other systems or peripheral devices if the maximum permissible weight of the rack is not exceeded and the installation rules for these devices are followed.
Before placing the system into the service position, be sure to follow the rack manufacturer’s safety instructions regarding rack stability.
 
Order information: The racking approach for the initial order must be either a 7014-T00, 7014-T42, or 7965-94Y. If an extra rack is required for I/O expansion drawers as a miscellaneous equipment specification (MES) order to an existing system, either #0551, #0553, or #ER05 must be ordered.
You must leave 2U of space at either the bottom or top of the rack, depending on the client's cabling preferences, to allow for cabling to exit the rack.
1.8.9 Useful rack additions
This section highlights several rack addition solutions for IBM Power Systems rack-based systems.
IBM System Storage 7226 Model 1U3 Multi-Media Enclosure
The IBM System Storage™ 7226 Model 1U3 Multi-Media Enclosure can accommodate up to two tape drives, two RDX removable disk drive docking stations, or up to four DVD-RAM drives.
The IBM System Storage 7226 Multi-Media Enclosure supports LTO Ultrium and DAT160 Tape technology, DVD-RAM, and RDX removable storage requirements on the following IBM systems:
IBM POWER6™ processor-based systems
IBM POWER7® processor-based systems
IBM POWER8 processor-based systems
IBM POWER9 processor-based systems
The IBM System Storage 7226 Multi-Media Enclosure offers an expansive list of drive feature options, as shown in Table 1-13.
Table 1-13 Supported drive features for the 7226-1U3 enclosure
Feature code | Description | Status
#5619 | DAT160 SAS Tape Drive | Available
#EU16 | DAT160 USB Tape Drive | Available
#1420 | DVD-RAM SAS Optical Drive | Available
#1422 | DVD-RAM Slim SAS Optical Drive | Available
#5762 | DVD-RAM USB Optical Drive | Available
#5763 | DVD Front USB Port Sled with DVD-RAM USB Drive | Available
#5757 | DVD RAM Slim USB Optical Drive | Available
#8248 | LTO Ultrium 5 Half High Fibre Tape Drive | Available
#8241 | LTO Ultrium 5 Half High SAS Tape Drive | Available
#8348 | LTO Ultrium 6 Half High Fibre Tape Drive | Available
#8341 | LTO Ultrium 6 Half High SAS Tape Drive | Available
#EU03 | RDX 3.0 Removable Disk Docking Station | Available
Option descriptions are as follows:
DAT160 160 GB Tape Drives: With SAS or USB interface options and a data transfer rate up to 12 MBps (assuming 2:1 compression), the DAT160 drive is read/write compatible with DAT160 and DDS4 data cartridges.
LTO Ultrium 5 Half-High 1.5 TB SAS Tape Drive: With a data transfer rate up to 280 MBps (assuming a 2:1 compression), the LTO Ultrium 5 drive is read/write compatible with LTO Ultrium 5 and 4 data cartridges, and read-only compatible with Ultrium 3 data cartridges. By using data compression, an LTO-5 cartridge can store up to 3 TB of data.
LTO Ultrium 6 Half-High 2.5 TB SAS Tape Drive: With a data transfer rate up to 320 MBps (assuming a 2.5:1 compression), the LTO Ultrium 6 drive is read/write compatible with LTO Ultrium 6 and 5 media, and read-only compatible with LTO Ultrium 4. By using data compression, an LTO-6 cartridge can store up to 6.25 TB of data.
DVD-RAM: The 9.4 GB SAS Slim Optical Drive with an SAS and USB interface option is compatible with most standard DVD disks.
RDX removable disk drives: The RDX USB docking station is compatible with most RDX removable disk drive cartridges when it is used in the same operating system. The 7226 offers the following RDX removable drive capacity options:
 – 500 GB (#1107)
 – 1.0 TB (#EU01)
 – 2.0 TB (#EU2T)
Removable RDX drives are in a rugged cartridge that inserts into an RDX removable (USB) disk docking station (#1103 or #EU03). RDX drives are compatible with docking stations, which are installed internally in IBM POWER6, POWER6+, POWER7, POWER7+, POWER8, and POWER9 processor-based servers, where applicable.
Media that is used in the 7226 DAT160 SAS and USB tape drive features are compatible with DAT160 tape drives that are installed internally in IBM POWER6, POWER6+, POWER7, POWER7+, POWER8, and POWER9 processor-based servers.
Media that is used in LTO Ultrium 5 Half-High 1.5 TB tape drives are compatible with Half-High LTO5 tape drives that are installed in the IBM TS2250 and TS2350 external tape drives, IBM LTO5 tape libraries, and half-high LTO5 tape drives that are installed internally in IBM POWER6, POWER6+, POWER7, POWER7+, POWER8, and POWER9 processor-based servers.
Figure 1-11 shows the IBM System Storage 7226 Multi-Media Enclosure.
Figure 1-11 IBM System Storage 7226 Multi-Media Enclosure
The IBM System Storage 7226 Multi-Media Enclosure offers customer-replaceable unit (CRU) maintenance service to help make the installation or replacement of new drives efficient. Other 7226 components are also designed for CRU maintenance.
The IBM System Storage 7226 Multi-Media Enclosure is compatible with most IBM POWER6, POWER6+, POWER7, POWER7+, POWER8, and POWER9 processor-based systems that offer current level Linux operating systems.
For a complete list of host software versions and release levels that support the IBM System Storage 7226 Multi-Media Enclosure, see IBM System Storage Interoperation Center (SSIC).
 
Note: Any of the existing 7216-1U2, 7216-1U3, and 7214-1U2 multimedia drawers are also supported.
Flat panel display options
The IBM 7316 Model TF4 is a rack-mountable flat panel console kit that can also be configured with the tray pulled forward and the monitor folded up, providing full viewing and keying capability for the HMC operator.
The Model TF4 is a follow-on product to the Model TF3 and offers the following features:
A slim, sleek, and lightweight monitor design that occupies only 1U (1.75 in.) in a 19-inch standard rack.
An 18.5-inch (409.8 mm x 230.4 mm) flat panel TFT monitor with truly accurate images and virtually no distortion.
The ability to mount the IBM Travel Keyboard in the 7316-TF4 rack keyboard tray.
Support for the IBM 1x8 Rack Console Switch (#4283) Keyboard/Video/Mouse (KVM) switch.
#4283 is a 1x8 Console Switch that fits in the 1U space behind the TF4. It is a CAT5-based switch that contains eight analog rack interface (ARI) ports for connecting either PS/2 or USB console switch cables. It supports chaining of servers by using an IBM Conversion Options switch cable (#4269). This feature provides four cables that connect a KVM switch to a system, or can be used in a daisy-chain scenario to connect up to 128 systems to a single KVM switch. It also supports server-side USB attachments.
1.8.10 Original equipment manufacturer rack
The system can be installed in a suitable original equipment manufacturer (OEM) rack if the rack conforms to the EIA-310-D standard for 19-inch racks. This standard is published by the Electronic Industries Alliance. For more information, see the IBM Knowledge Center at:
The website mentions the following key points:
The front rack opening must be 451 mm wide ± 0.75 mm (17.75 in. ± 0.03 in.), and the rail-mounting holes must be 465 mm ± 0.8 mm (18.3 in. ± 0.03 in.) apart on-center (the horizontal width between the vertical columns of the holes on the two front-mounting flanges and on the two rear-mounting flanges). Figure 1-12 on page 28 is a top view showing the specification dimensions.
Figure 1-12 Top view of rack specification dimensions (not specific to IBM)
The vertical distance between the mounting holes must consist of sets of three holes spaced (from bottom to top) 15.9 mm (0.625 in.), 15.9 mm (0.625 in.), and 12.67 mm (0.5 in.) on-center, making each three-hole set of vertical hole spacing 44.45 mm (1.75 in.) apart on-center. Rail-mounting holes must be 7.1 mm ± 0.1 mm (0.28 in. ± 0.004 in.) in diameter. Figure 1-13 shows the top front specification dimensions.
Figure 1-13 Rack specification dimensions: Top front view
1.9 Hardware Management Console
This section describes the Hardware Management Consoles (HMCs) that are available for Power Systems servers.
1.9.1 New features
Here are some of the new features of the HMCs:
New HMCs are now based on systems with POWER processors.
Intel x86-based HMCs are supported but are no longer available.
Virtual HMCs (vHMCs) are available for x86 and Power Systems virtual environments.
1.9.2 Hardware Management Console overview
Administrators can use the HMC, which is a dedicated appliance, to configure and manage system resources on IBM Power Systems servers. GUI, command-line interface (CLI), or REST API interfaces are available. The HMC provides basic virtualization management support for configuring logical partitions (LPARs) and dynamic resource allocation, including processor and memory settings for selected Power Systems servers.
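As a simple illustration of scripting against the HMC CLI mentioned above (a hedged sketch: the host name and credentials are placeholders, and SSH access to the HMC plus the third-party paramiko library are assumed), the lssyscfg command can be used to list the managed systems and their states:

   import paramiko  # third-party SSH library (pip install paramiko)

   HMC_HOST = "hmc.example.com"   # placeholder HMC host name
   HMC_USER = "hscroot"           # placeholder HMC user ID
   HMC_PASSWORD = "********"      # placeholder password

   # Open an SSH session to the HMC and run a read-only CLI query.
   client = paramiko.SSHClient()
   client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
   client.connect(HMC_HOST, username=HMC_USER, password=HMC_PASSWORD)

   # List managed systems with machine type/model, serial number, and state.
   _, stdout, _ = client.exec_command(
       "lssyscfg -r sys -F name,type_model,serial_num,state")
   for line in stdout:
       name, type_model, serial, state = line.strip().split(",")
       print(f"{name}: {type_model} S/N {serial} is {state}")

   client.close()

The same queries can also be run interactively from a restricted shell on the HMC or through the REST API; the SSH approach is shown only because it is the simplest to script.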
The HMC also supports advanced service functions, including guided repair and verification, concurrent firmware updates for managed systems, and around-the-clock error reporting through IBM Electronic Service Agent (ESA) for faster support.
The HMC management features help improve server usage, simplify systems management, and accelerate provisioning of server resources by using PowerVM virtualization technology.
The HMC is available as a hardware appliance or as a vHMC. The Power L922 server supports several service environments, including attachment to one or more HMCs or vHMCs. This is the default configuration for servers supporting multiple logical partitions with dedicated resource or virtual I/O.
Here are the HMCs for various hardware architectures:
x86-based HMCs: 7042-CR7, CR8, or CR9
POWER processor-based HMC: 7063-CR1
vHMC on x86 or Power Systems LPARs
Hardware support for customer-replaceable units (CRUs) comes standard with the HMC. In addition, users can upgrade this support level to IBM onsite support to be consistent with other Power Systems servers.
 
Note:
An HMC or vHMC is required for the Power L922 server to support multiple LPARs.
For a single or full partition system, an HMC or vHMC is optional.
Integrated Virtual Management (IVM) is no longer supported.
For more information about vHMC, see Virtual HMC Appliance (vHMC) Overview.
Figure 1-14 shows the HMC model selections and tier updates.
Figure 1-14 HMC selections
1.9.3 Hardware Management Console code level
The HMC code must be running at Version 9 Release 1 Modification 920 (V9R1M920) or later when you use the HMC with the Power L922 server.
If you are attaching an HMC to a new server or adding a function to an existing server that requires a firmware update, the HMC machine code might need to be updated to support the firmware level of the server. In a dual-HMC configuration, both HMCs must be at the same version and release of the HMC code.
To determine the HMC machine code level that is required for the firmware level on any server, go to Fix Level Recommendation Tool (FLRT) on or after the planned availability date for this product.
FLRT identifies the correct HMC machine code for the selected system firmware level.
 
Note:
Access to firmware and machine code updates is conditional on entitlement and license validation in accordance with IBM policy and practice. IBM might verify entitlement through customer number, serial number electronic restrictions, or any other means or methods that are employed by IBM at its discretion.
HMC V9 supports only the Enhanced+ version of the GUI. The Classic version is no longer available.
HMC V9R1.911.0 added support for managing IBM OpenPOWER systems. The same HMC that is used to manage flexible service processor (FSP)-based enterprise systems can manage the baseboard management controller (BMC) based Power Systems AC and Power Systems LC servers. This support provides a consistent and consolidated hardware management solution.
HMC V9 supports connections to servers that are based on POWER9, POWER8, and POWER7 processors. There is no support in this release for servers that are based on POWER6 processors or earlier.
1.9.4 Two architectures of Hardware Management Console
There are now two options for the HMC hardware: The earlier Intel-based HMCs, and the newer HMCs that are based on an IBM POWER8 processor. The x86-based HMCs are no longer available for ordering, but are supported as an option for managing the Power L922 server.
You may use either architecture to manage the servers. You also may use one Intel-based HMC and one POWER8 processor-based HMC if the software is at the same level.
It is a preferred practice to use the new POWER8 processor-based consoles for server management.
Intel-based HMCs
HMCs that are based on Intel processors that support V9 code are:
7042-CR9
7042-CR8
7042-CR7
The 7042-CR6 and earlier HMCs are not supported for managing the Power L922 server.
The 7042-CR9 has the following specifications:
2.4 GHz Intel Xeon processor E5-2620 V3
16 GB (1 x 16 GB) of 2.133 GHz DDR4 system memory
500 GB SATA SFF HDD
SATA CD-RW and DVD-RAM
Four Ethernet ports
Six USB ports (two front and four rear)
One PCIe slot
POWER8 processor-based HMC
The POWER processor-based HMC is machine type and model 7063-CR1. It has the following specifications:
1U base configuration
IBM POWER8 120 W 6-core CPU
32 GB (4 x 8 GB) of DDR4 system memory
Two 2-TB SATA LFF 3.5-inch HDD RAID 1 disks
Rail bracket option for round hole rack mounts
Two USB 3.0 hub ports in the front of the server
Two USB 3.0 hub ports in the rear of the server
Redundant 1 kW power supplies
Four 10-Gb Ethernet Ports (RJ-45) (10 Gb/1 Gb/100 Mb)
One 1-Gb Ethernet port for management (BMC)
All future HMC development will be done for the POWER8 processor-based 7063-CR1 model and its successors.
 
Note: System administrators can remotely start or stop a 7063-CR1 HMC by using ipmitool or WebUI.
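The following minimal Python sketch wraps the ipmitool approach that the note mentions (the BMC address and credentials are placeholders; ipmitool must be installed on the administrator's workstation, and the chosen power action should follow local change-control procedures):

   import subprocess

   BMC_HOST = "hmc-bmc.example.com"   # placeholder 7063-CR1 BMC address
   BMC_USER = "admin"                 # placeholder BMC user ID
   BMC_PASSWORD = "********"          # placeholder password

   def hmc_power(action: str) -> str:
       """Run 'ipmitool chassis power <action>' (for example, status, on, soft, or off)."""
       result = subprocess.run(
           ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
            "-U", BMC_USER, "-P", BMC_PASSWORD,
            "chassis", "power", action],
           capture_output=True, text=True, check=True)
       return result.stdout.strip()

   print(hmc_power("status"))   # for example: "Chassis Power is on"
   # hmc_power("soft")          # request a graceful shutdown of the HMC
   # hmc_power("on")            # power the HMC back on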
1.9.5 Hardware Management Console connectivity to POWER9 processor-based systems’ service processors
POWER9 processor-based systems and their predecessor systems that are managed by an HMC require Ethernet connectivity between the HMC and the server’s service processor. Additionally, to perform an operation on an LPAR, initiate Live Partition Mobility (LPM), or perform IBM Active Memory Sharing operations on PowerVM, you must have an Ethernet link to the managed partitions. A minimum of two Ethernet ports are needed on the HMC to provide such connectivity.
For the HMC to communicate properly with the managed server, eth0 of the HMC must be connected to either the HMC1 or HMC2 ports of the managed server, although other network configurations are possible. You may attach a second HMC to the remaining HMC port of the server for redundancy. The two HMC ports must be addressed by two separate subnets.
Figure 1-15 shows a simple network configuration to enable the connection from the HMC to the server and to allow for dynamic LPAR operations. For more information about HMC and the possible network connections, see IBM Power Systems HMC Implementation and Usage Guide, SG24-7491.
Figure 1-15 Network connections from the HMC to service processor and LPARs
By default, the service processor HMC ports are configured for dynamic IP address allocation. The HMC can be configured as a DHCP server, providing an IP address at the time that the managed server is powered on. In this case, the FSP is allocated an IP address from a set of address ranges that is predefined in the HMC software.
If the service processor of the managed server does not receive a DHCP reply before timeout, predefined IP addresses are set up on both ports. Static IP address allocation is also an option and can be configured by using the Advanced System Management Interface (ASMI) menus.
 
Notes: The two service processor HMC ports have the following features:
1 Gbps connection speed.
Visible only to the service processor. They can be used to attach the server to an HMC or to access the ASMI options from a client directly from a client web browser.
Use the following network configuration if no IP addresses are set:
 – Service processor eth0 (HMC1 port): 169.254.2.147 with netmask 255.255.255.0
 – Service processor eth1 (HMC2 port): 169.254.3.147 with netmask 255.255.255.0
1.9.6 High availability Hardware Management Console configuration
The HMC is an important hardware component. Although Power Systems servers and their hosted partitions can continue to operate when the managing HMC becomes unavailable, certain operations, such as dynamic LPAR, partition migration that uses PowerVM LPM, or the creation of a partition, cannot be performed without the HMC. Power Systems servers may have two HMCs attached to a system, which provides redundancy if one of the HMCs is unavailable.
To achieve HMC redundancy for a POWER9 processor-based server, the server must be connected to two HMCs:
The HMCs must be running the same level of HMC code.
The HMCs must use different subnets to connect to the service processor.
The HMCs must be able to communicate with the server’s partitions over a public network to allow for full synchronization and functionality.
Figure 1-16 shows one possible highly available HMC configuration that manages two servers. Each HMC is connected to one FSP port of each managed server.
Figure 1-16 Highly available HMC networking example.
For simplicity, only the hardware management networks (LAN1 and LAN2) are highly available. However, the open network (LAN3) can be made highly available by using a similar concept and adding a second network between the partitions and HMCs.
For more information about redundant HMCs, see IBM Power Systems HMC Implementation and Usage Guide, SG24-7491.