IBM Cloud Object Storage Gen2 hardware appliances
This chapter provides an overview of the IBM Cloud Object Storage second-generation (Gen2) hardware appliances.
3.1 Gen2 hardware appliance overview
In May 2019, IBM introduced the second-generation hardware appliances for IBM Cloud Object Storage (IBM COS). The new appliances provide better performance, higher density, more flexibility, and more cost-effective scaling than the existing first-generation models. The new appliances use the same server hardware components across all three functions: Manager, Accesser, and Slicestor nodes. This design simplifies installation, configuration, monitoring, management, and troubleshooting.
3.1.1 Highlights
IBM Cloud Object Storage Gen2 hardware delivers the following benefits:
25% - 50% lower cost than first-generation hardware, depending on workload and use case
Up to 15% more reads and writes completed in the same timeframe, compared to first-generation hardware
Consolidated solution that supports up to 1.27 PB in a single node, and 10.18 PB in a single rack
Only acquire the amount of performance or capacity initially required, and grow performance or capacity as needed, together or independently
New servers in the IBM COS Gen2 hardware product line include support for the following components:
The latest Intel Xeon processors (scalable processor family)
Higher performance serial-attached SCSI (SAS) controllers
Higher speed memory registered dual in-line memory modules (RDIMMs)
The Gen2 architecture provides the storage layer using a controller node server and disk enclosure combination, including support for:
Higher capacity in fewer rack units (RUs)
More capacity in a single node: 1.27 PB with 12 TB drives in 5 RU
Adding capacity independently of performance
These improvements mean that clients can start with a small system, in terms of both capacity and performance, and grow each dimension independently as needed.
All of the capabilities, features, functions and benefits of IBM COS that clients are used to today are maintained in this generation of hardware:
The ability to scale-out to multi-petabyte or multi-exabyte capacity with faster performance
Single-site, mirrored two-site, and geographically dispersed deployment options
Configuration that provides up to 8 nines of availability and up to 15 nines of reliability
Clients will also receive further benefits:
Lower cost
Higher performance
More options to tailor configurations for workload optimization
Greater consolidation of data in a single rack unit or a rack configuration
The ability to more easily and cost effectively grow their systems as needed
The ability to intermix Gen2 hardware with previous-generation or third-party hardware
 
IBM Cloud Object Storage Gen2 announcement letter: You can access the announcement letter at this link:
3.2 Appliance details
IBM Cloud Object Storage Gen2 hardware is both an extension and an enhancement of the existing hardware. It is an extension because it provides the same functionality, runs the same IBM COS software, and works alongside the existing generation without any special requirements.
Clients can intermix past and present generations of hardware in the same system, even at the storage pool level. For example, an existing client that has an IBM COS system today that contains a Manager 3105 node, two Accesser 3105 nodes, and four Slicestor 2212A nodes could add a set of four Gen2 Slicestor nodes, as shown in Figure 3-1.
Figure 3-1 Gen1 system expansion with Gen2 Slicestor appliances
IBM COS has been designed to support exabytes of data in a single namespace, and this generation maintains that scalability while supporting more data per rack unit and per rack. With this new generation, capacity per rack unit has increased by 27%, and a single rack can now hold more than 10 PB.
The IBM COS architecture does not change with the Gen2 hardware: the storage system is still provided by the Manager, Accesser, and Slicestor functions. The Gen2 Manager and Accesser appliances are largely consistent with the previous Manager and Accesser appliances.
The key difference in Gen2 is in how the storage layer is architected. In the previous generation, the storage layer is provided by high-density server appliances: the performance components and the storage components are in the same physical box, and therefore must be installed, grown, and expanded together.
In Gen2, the storage layer is divided into two separate components: a controller node that contains the performance components, and a disk enclosure that contains the storage components. In both the previous generation and in Gen2, the storage layer is called a Slicestor node or appliance, and the functional components, component names, and system management remain exactly the same. This ensures that the same IBM COS software is used across both generations.
The Gen2 server appliances are all based on the same 1U rack server with slightly different configuration for each function. The following new appliances are introduced at the time of the product launch:
IBM Cloud Object Storage Manager M10
IBM Cloud Object Storage Accesser A10
IBM Cloud Object Storage Slicestor 12
IBM Cloud Object Storage Slicestor 53
IBM Cloud Object Storage Slicestor 106
 
Note: The new Slicestor appliances include an IBM Cloud Object Storage Controller Node C10 and a Small, Medium, or Large Disk Enclosure. At the time of the product launch a single disk enclosure can be connected to the controller node.
The Gen2 server appliances are based on the Intel Xeon Scalable Processor family (formerly Skylake), giving the platform increased performance over the previous generation. The platform also enables deployment of more memory and multiple CPUs in the server appliances, which makes it possible to scale up performance within the same chassis in the future.
The appliances have two optical 10 GbE ports integrated on the motherboard, which come with short-wave SFP+ transceivers by default. The intelligent platform management interface (IPMI), for out-of-band hardware management, has a dedicated 1 GbE port with an RJ45 connector. The VGA port and the two USB ports, situated on the rear of the nodes, can be used for console connection. The IPMI port also supports a virtual console.
 
Tip: The IPMI interface is set to DHCP by default. The setting can be modified in the IBM COS nut shell with the ipmi command, or in the web-based IPMI GUI (the default username and password are admin and admin).
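As an illustration of the tip above, the IPMI LAN settings can also be inspected or changed over the network with a standard IPMI client. The following minimal sketch drives the common ipmitool utility from Python; it assumes ipmitool is installed on the workstation, the IPMI address shown is a placeholder, and the credentials are the factory defaults noted above. It is a sketch, not an IBM-documented procedure.

```python
import subprocess

# Placeholder IPMI address of an appliance; replace with your node's IPMI IP.
IPMI_HOST = "192.0.2.10"
IPMI_USER = "admin"   # factory default, per the tip above
IPMI_PASS = "admin"   # factory default, per the tip above

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the appliance over the lanplus interface."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", IPMI_HOST, "-U", IPMI_USER, "-P", IPMI_PASS, *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Print the current LAN configuration of channel 1 (IP source, address, netmask).
print(ipmi("lan", "print", "1"))

# To move from DHCP to a static address, uncomment and adjust:
# ipmi("lan", "set", "1", "ipsrc", "static")
# ipmi("lan", "set", "1", "ipaddr", "192.0.2.10")
```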
All appliances are equipped with redundant, hot-swappable power supplies and cooling fans. The power supplies can be replaced from the rear; the cooling fans can be accessed by removing the central cover from the top of the server appliance.
 
Tip: The IBM COS Manager monitors the health of the underlying hardware components, and alerting can be set up in the Manager GUI. IBM COS also supports SNMP monitoring: the appliances can be connected to an SNMP server and send alerts in case of failures. SNMP polling is supported as well.
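As a sketch of SNMP polling, the snippet below performs a single SNMPv2c GET of the standard sysUpTime object using the pysnmp library. The hostname and community string are placeholder assumptions; the appliance's actual SNMP settings are configured in the Manager.

```python
# Minimal SNMPv2c poll of an appliance with pysnmp (pip install pysnmp).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

TARGET = "slicestor01.example.com"  # placeholder appliance hostname
COMMUNITY = "public"                # assumed community string

errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
    SnmpEngine(),
    CommunityData(COMMUNITY, mpModel=1),       # mpModel=1 selects SNMP v2c
    UdpTransportTarget((TARGET, 161)),
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysUpTime", 0)),
))

if errorIndication:
    print(f"Polling failed: {errorIndication}")
elif errorStatus:
    print(f"SNMP error: {errorStatus.prettyPrint()}")
else:
    for varBind in varBinds:
        print(" = ".join(x.prettyPrint() for x in varBind))
```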
3.2.1 The Manager appliance
The IBM Cloud Object Storage Manager M10 (Manager M10) is the successor of the existing Manager 3105. Table 3-1 includes the new machine types and models.
Table 3-1 Gen2 Manager machine type and model
Machine type   Model   Description
4958/4957      M10     IBM Cloud Object Storage Manager M10
The Manager M10 contains a RAID controller that protects the OS drives from a single drive failure. The node can host up to 10 disks, but initially only two slots on the left side of the chassis are populated with the OS drives.
The Gen2 Manager appliance is designed to support up to 4,500 nodes in a single IBM Cloud Object Storage System™.
Figure 3-2 depicts the front of the Manager M10.
Figure 3-2 IBM Cloud Object Storage Manager M10 front view
Figure 3-3 shows the rear of the Manager M10 with the available ports.
Figure 3-3 IBM Cloud Object Storage Manager M10 rear view
3.2.2 The Accesser appliance
The IBM Cloud Object Storage Accesser A10 (Accesser A10) is the successor of the existing Accesser appliance models. Table 3-2 includes the new machine types and models:
Table 3-2 Gen2 Accesser machine type
Machine type   Model   Description
4958/4957      A10     IBM Cloud Object Storage Accesser A10
The main difference between the Accesser A10 and the other Gen2 appliances is that it ships with twice the memory by default and a more powerful Xeon Gold processor, which allows high throughput for Accesser-related functions. Figure 3-4 shows the front of the Accesser A10.
Figure 3-4 IBM Cloud Object Storage Accesser A10 front view
Figure 3-5 shows the rear of the Accesser A10 with the available ports.
Figure 3-5 IBM Cloud Object Storage Accesser A10 rear view
3.2.3 The Slicestor appliances
The new IBM Cloud Object Storage Slicestor appliances are the successors of the existing Slicestor models with similar or slightly higher disk capacity. Table 3-3 includes the new machine types and models:
Table 3-3 Gen2 Slicestor machine types
Machine type   Model   Description
4958/4957      C10     IBM Cloud Object Storage Controller Node C10 (Controller Node C10)
4958/4957      J10     IBM Cloud Object Storage Small JBOD Chassis (Small Disk Enclosure or Small J10 Disk Enclosure)
4958/4957      J11     IBM Cloud Object Storage Medium JBOD Chassis (Medium Disk Enclosure or Medium J11 Disk Enclosure)
4958/4957      J12     IBM Cloud Object Storage Large JBOD Chassis (Large Disk Enclosure or Large J12 Disk Enclosure)
The Gen2 Slicestor appliances consist of a controller node and disk enclosures. This architecture provides more flexibility in scaling the IBM COS system in the future, because capacity and performance can be scaled independently in the storage layer:
If additional capacity and performance are needed, new Slicestor nodes (which include both controller nodes and disk enclosures) in a new device set could be installed, as shown in Figure 3-6 on page 49.
Figure 3-6 Adding capacity and performance to an existing Gen2 system
If the system performance is sufficient and only more capacity is needed, additional disk enclosures could be added to the existing Slicestor appliances. This allows more cost-effective capacity expansion, because only additional disks and enclosures, but no compute resources, have to be purchased. The addition of new disk enclosures is shown in Figure 3-7; a toy model of both expansion paths follows the figure.
Figure 3-7 Adding capacity by installing a new disk enclosure for the existing Gen2 Slicestor nodes
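To make the two expansion paths concrete, here is a toy Python model of a device set. The class names are illustrative only and are not part of the IBM COS software; the example assumes 106-bay enclosures with 12 TB drives, and models the add-an-enclosure path even though, at launch, a single enclosure per controller node is supported (see the Note below).

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Enclosure:
    bays: int        # 12, 53, or 106
    drive_tb: int    # certified NL-SAS drive size, e.g. 4 ... 12 TB

    @property
    def raw_tb(self) -> int:
        return self.bays * self.drive_tb

@dataclass
class Slicestor:
    """A Gen2 Slicestor node: one controller node plus its enclosure(s)."""
    enclosures: List[Enclosure] = field(default_factory=list)

    @property
    def raw_tb(self) -> int:
        return sum(e.raw_tb for e in self.enclosures)

# Path 1 (Figure 3-6): a new device set adds capacity AND performance.
device_set = [Slicestor([Enclosure(106, 12)]) for _ in range(4)]
print(sum(n.raw_tb for n in device_set), "TB raw")   # 5088 TB raw

# Path 2 (Figure 3-7): an extra enclosure per node adds capacity only
# (a future expansion path; at launch, one enclosure per controller node).
for node in device_set:
    node.enclosures.append(Enclosure(106, 12))
print(sum(n.raw_tb for n in device_set), "TB raw")   # 10176 TB raw
```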
All disk enclosures support the same type of 3.5-inch, hot-swappable NL-SAS drives. An overview of the Gen2 Slicestor appliances is shown in Figure 3-8.
Figure 3-8 Gen2 Slicestor appliance overview
Note: At product launch, only a single disk enclosure can be connected to the controller node. A controller node can handle only a single disk enclosure type (mixing of different enclosures in the same device set is not allowed).
IBM Cloud Object Storage Controller Node C10
The IBM Cloud Object Storage Controller Node C10 (Controller Node C10) is a 1U rack server, based on the same hardware as the Gen2 Manager and Accesser appliances. The main difference is that the controller node has a 16-channel SAS HBA installed that enables connection of the disk enclosures, as shown in Figure 3-10. The controller node connects to the disk enclosure with redundant MiniSAS HD cables. The controller node has two built-in OS disks that are mirrored by the operating system.
 
Tip: The supplied MiniSAS HD cables are 5 meters long; typically, the controller node and the disk enclosure are installed in the same rack.
Figure 3-9 depicts the front of the Controller Node C10.
Figure 3-9 IBM Cloud Object Storage Controller Node C10 front view
Figure 3-10 shows the rear of the Controller Node C10.
Figure 3-10 IBM Cloud Object Storage Controller Node C10 rear view
Note: The Controller Node C10 has sufficient memory to run the Embedded Accesser feature, which provides a low-cost option for small configurations when high performance is not a requirement.
Small Disk Enclosure
The Small Disk Enclosure (also known as Small J10 Disk Enclosure or Small JBOD Chassis) is the lowest-capacity, 12-bay, 2U enclosure, which makes it possible to configure an entry-level IBM COS system with 72 TB of usable capacity. The Small Disk Enclosure requires all drive bays to be populated. The Slicestor 12, which uses this enclosure, is the successor of the Slicestor 2212 and 2212A models.
The minimum raw capacity of the 12-bay enclosure is 48 TB, but it can be as high as 144 TB when using 12 TB disk drives. The drives are situated at the front of the enclosure, and can be hot-swapped without moving or sliding out the chassis.
Note: IBM continuously certifies new drive types and larger-capacity drives. At the time of publication of this book, the largest available drive size is 12 TB. Larger-capacity drives are expected to be available soon.
The Small Disk Enclosure has a different drive carrier compared to the 53-bay and 106-bay enclosures, so drives are not interchangeable.
Figure 3-11 depicts the front of the Small Disk Enclosure.
Figure 3-11 IBM Cloud Object Storage Small Disk Enclosure front view
At the front left side of the enclosure, there is an operator’s panel, which provides basic diagnostics functions. The overview of the panel can be seen in Figure 3-12.
Figure 3-12 Operator’s panel of the Small Disk Enclosure
Figure 3-13 shows the rear of the Small Disk Enclosure with a single I/O module.
 
Note: There are three SAS ports on the enclosure, but only the left two ports (A&B) are used to connect it with the controller node via MiniSAS HD cables.
Figure 3-13 IBM Cloud Object Storage Small Disk Enclosure rear view
The power and cooling modules (PCM) are located at the rear, on the sides of the enclosure. A single PCM can supply power and cooling for the enclosure if the second module fails. The PCM has its own LEDs to provide fault information, as displayed in Figure 3-14 on page 53. PCMs are hot-pluggable and replacement only takes a few seconds.
Figure 3-14 Power and cooling module
Attention: Never remove the failed PCM, unless the replacement is available. Replace the failed component within a few seconds.
Figure 3-15 depicts the drive enumeration of the Small Disk Enclosure.
Figure 3-15 Small Disk Enclosure drive enumeration
Medium and Large Disk Enclosures
The Medium Disk Enclosure (also known as Medium J11 Disk Enclosure or Medium JBOD Chassis) and the Large Disk Enclosure (also known as Large J12 Disk Enclosure or Large JBOD Chassis) use the same enclosure chassis, drives, fan and power modules, and other components. In this section, we describe the similarities and differences between the enclosures.
Medium Disk Enclosure capacity
The Medium Disk Enclosure is the recommended enclosure for high-performance, higher-density configurations. The enclosure is 4U and can host up to 53 hot-swappable NL-SAS drives. The Slicestor 53, which uses this enclosure, is the successor of the Slicestor 2448 and 3448 models, although it has slightly more drives.
The minimum raw capacity of the 53-bay enclosure is 212 TB when fully populated with 4 TB drives, and the maximum capacity is 636 TB when using 12 TB disks. The drives can be added or removed from the top of the chassis. During installation, extra care is required for proper cabling, because the chassis needs to slide out in order to swap failed drives.
Large Disk Enclosure capacity
The Large Disk Enclosure is the recommended enclosure for high-density configurations. The enclosure can host up to 106 hot-swappable NL-SAS drives in only 4 rack units. The Slicestor 106, which uses this enclosure, is the successor of the Slicestor 2584 model.
The minimum raw capacity of the 106-bay enclosure is 424 TB when fully populated with 4 TB drives, and the maximum capacity is 1,272 TB when using 12 TB disks.
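The quoted capacity figures follow directly from the bay counts and the certified drive sizes, as the short sketch below reproduces. The eight-node rack assumption behind the roughly 10.18 PB per-rack figure from 3.1.1 is an inference from the 5 RU node size, not a statement from this section.

```python
# Raw capacity per enclosure = bays x drive size (TB).
ENCLOSURES = {"Small J10": 12, "Medium J11": 53, "Large J12": 106}

for name, bays in ENCLOSURES.items():
    print(f"{name}: {bays * 4} TB min (4 TB drives), "
          f"{bays * 12} TB max (12 TB drives)")
# Small J10:   48 TB min,   144 TB max
# Medium J11: 212 TB min,   636 TB max
# Large J12:  424 TB min, 1,272 TB max (= 1.27 PB per Slicestor 106 node)

# Assuming eight 5 RU Slicestor 106 nodes fit in one rack:
print(f"{8 * 106 * 12 / 1000:.2f} PB per rack")   # 10.18 PB
```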
 
Important: Although the 53 and 106-bay 4U disk enclosures show physical similarities, the Medium Disk Enclosure cannot host more than 53 drives, and cannot be upgraded to a 106-bay enclosure. If more than 53 drives are required in a single enclosure, Slicestor 106/Large Disk Enclosure has to be deployed.
Physical view and modules
Figure 3-16 depicts the front of the Medium or Large Disk Enclosure.
Figure 3-16 IBM Cloud Object Storage Medium or Large Disk Enclosure front view
The enclosure has a small LED panel at the lower left corner of the front, which displays basic diagnostics information, as shown in Figure 3-17. The fault LED’s name indicates where the failed component is located, or which lid has to be removed to access it.
Figure 3-17 Front LED panel of Medium or Large Disk Enclosure
Figure 3-18 shows the rear of the Medium or Large Disk Enclosure. Note that initially the enclosure will be supplied with a single I/O module on the right.
Figure 3-18 IBM Cloud Object Storage Medium or Large Disk Enclosure rear view
 
Attention: During the planning phase, make sure that racks at least 1,200 mm (47.2 inches) deep are available, and that there is adequate space between the rack rows. The enclosure is quite heavy, and it is advisable to install it in lower rack positions. Never install the enclosure in the top positions of an empty rack, because the rack may tip over when the enclosure slides out.
The enclosure comes with a cable management arm, which enables the enclosure to slide out and the top lids to be removed when a disk has to be replaced. Figure 3-19 shows the proper cabling. Both enclosure types are connected to the controller node via four MiniSAS HD cables regardless of the drive count; thus, the Medium Disk Enclosure has twice the bandwidth per drive compared to the larger model (see the sketch after Figure 3-19).
Figure 3-19 Cable management arms and cabling
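A back-of-the-envelope calculation illustrates the per-drive bandwidth claim. It assumes 12 Gb/s SAS-3 lanes and four lanes per MiniSAS HD cable, which are typical values rather than figures stated in this section.

```python
LANES = 4 * 4          # 4 MiniSAS HD cables x 4 lanes each = 16 lanes
GBPS_PER_LANE = 12     # assumed SAS-3 signalling rate per lane

total_gbps = LANES * GBPS_PER_LANE     # 192 Gb/s to the enclosure
for drives in (53, 106):
    print(f"{drives:>3} drives: {total_gbps / drives:.1f} Gb/s per drive")
#  53 drives: 3.6 Gb/s per drive  (Medium)
# 106 drives: 1.8 Gb/s per drive  (Large) -> exactly half
```

Note that the assumed 16 lanes are also consistent with the 16-channel SAS HBA in the Controller Node C10.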
Drive population
The enclosure and the drives are packaged separately at the factory because of their weight. During installation, the enclosure has to be racked first, and the drives inserted afterwards.
At the time of the product announcement, only full population of the enclosures is supported.
The Medium Disk Enclosure actually has 106 disk slots, but only 53 of them are populated with drives. Figure 3-20 illustrates the correct drive population for a fully populated Medium Disk Enclosure.
Figure 3-20 Full drive population for the Medium Disk Enclosure (yellow highlighted slots to be populated with drives)
Figure 3-21 depicts the drive enumeration of the Large Disk Enclosure.
Figure 3-21 Large Disk Enclosure drive enumeration
3.3 Appliance specifications
Figure 3-22 contains the Gen2 server appliances’ specifications.
Figure 3-22 Gen2 server appliances’ specification
Figure 3-23 contains the disk enclosures’ specifications.
Figure 3-23 Gen2 disk enclosures’ specifications
 
Tip: Slicestor 53 and 106 appliances require 1,200 mm (47.2 inch) deep racks. The IBM Enterprise Slim Rack (MTM 7965-S42) can be used with the rear rack extender (feature code: ECRK) installed. More information about the rack and the rear extender can be found at the following IBM Knowledge Center links: https://www.ibm.com/support/knowledgecenter/8247-42L/p8had/p8had_7965s_kickoff.htm and https://www.ibm.com/support/knowledgecenter/8247-42L/p8had/p8had_7965s_fcs.htm
3.4 Performance
The second-generation appliances incorporate recent CPU, memory, SAS controller, and drive technologies that generally result in higher overall performance. In this section, we briefly compare the appliances’ individual performance characteristics to the previous-generation models. For more comprehensive performance sizing details, see 2.2, “Planning for performance” on page 31, or consult an IBM COS subject matter expert.
3.4.1 Accesser performance
The Gen2 Accesser A10 uses a newer generation CPU and one and a half times more memory compared to the Accesser 3105. When using 10 GbE connections, this results in up to 15% higher performance for both read and write operations.
3.4.2 Slicestor performance
The Slicestor 12 uses a newer-generation CPU, has a 16-channel disk controller versus an 8-channel controller, and uses NL-SAS drives instead of the SATA drives of the Slicestor 2212A. For write operations, this results in up to 10% better performance compared to the Slicestor 2212A. For reads, there is no significant gain, because the small number of spindles in the chassis is the limiting factor in these models.