Planning
This chapter describes the necessary planning for IBM Platform Computing solutions. The following topics are presented in this chapter:
 – 3.1, "Hardware setup for this residency"
 – 3.2, "Software packages"
3.1 Hardware setup for this residency
We performed all the product installations and configurations, use case scenarios, and tests on the cluster infrastructure that is represented in Figure 3-1.
Figure 3-1 Infrastructure that is used in this book
3.1.1 Server nodes
All the nodes that are assigned to the team for this publication are IBM iDataPlex M3 servers with the configuration that is shown in Table 3-1 on page 17.
Table 3-1 IBM iDataPlex M3 server configuration

Server nodes
 – Host names: i05n36 - i05n68
Processor
 – Model: Intel Xeon X5670 @ 2.93 GHz
 – Sockets: 2
 – Cores: 6 per socket, 12 total
Memory
 – Installed: 48 GB (6x 8 GB) @ 1333 MHz (DDR3)
High-speed network
 – Connections: InfiniBand
 – Interface names: i05i36 - i05i68
InfiniBand switch/adapter
 – Switch: 8 QLogic (QDR, non-blocking configuration)
 – InfiniBand adapter: QLogic IBA 7322 QDR InfiniBand HCA
Disk drives
 – Model: WD2502ABYS-23B7A
 – Size: 1x 250 GB
System information
 – BIOS vendor: IBM Corporation
 – BIOS version: TME152C
 – IBM Integrated Management Module version: YUOO87C
Network switches
There are two networks in the environment: one Ethernet network for management and public access, and one InfiniBand network for message passing.
 
Second Ethernet network: A second Ethernet network is needed to meet the installation requirements of IBM Platform HPC and IBM Platform Cluster Manager Advanced Edition. For more details about the setup for those products, see 3.1.3, “Infrastructure planning” on page 20.
Directory services
Authentication is handled by a Lightweight Directory Access Protocol (LDAP) server that runs on i05n36, with the users' home directories served over Network File System (NFS) from i05n36. (Transport Layer Security (TLS) is not enabled on LDAP.) To manage (create, modify, or delete) users, the LDAP Account Management tool is available at this website:
The tool automatically maps the NFS home directory for any new user (if that option is selected at creation). We installed pdsh for parallel shell access on all nodes. Example 3-1 provides a few examples.
Example 3-1 Parallel shell examples
pdsh -w i05n[36-41,43-68] date # runs date in parallel on all nodes
pdcp -w i05n[36-41,43-68] /etc/ntp.conf /etc # copies local /etc/ntp.conf to /etc on all nodes
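Because every node authenticates against the same LDAP server and mounts the same NFS home directories, pdsh also offers a quick way to verify that setup across the cluster. This sketch assumes a hypothetical user name (testuser) and a /home mount point, neither of which is taken from our configuration listings:
pdsh -w i05n[36-41,43-68] "getent passwd testuser"   # every node should resolve the LDAP user
pdsh -w i05n[36-41,43-68] "df -h /home"              # every node should show the NFS-served home file system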
Remote console and power commands are also available for each node. Example 3-2 shows some examples.
Example 3-2 Remote access command examples
rpower i05n[43-68] status # reports status
rcons i05n[43-68] # opens console
3.1.2 Shared storage
We chose IBM General Parallel File System (GPFS) to power the file system that is used in all tests in this book.
It provides a high-performance enterprise file management platform, and it meets our needs to store and forward large amounts of file-based data quickly, reliably, and efficiently. These systems safely support high-performance data and offer consistent access to a common set of data from multiple servers. GPFS brings together the power of multiple file servers and multiple storage controllers to provide higher reliability, and it therefore outperforms single file server solutions.
We created a 300-GB GPFS file system on i05n[36-68]. The Network Shared Disks (NSDs) are logical volumes (on local disk) from nodes i05n[67-68]. You can get the configuration by executing the following commands:
/usr/lpp/mmfs/bin/mmlscluster   # lists the cluster name, configuration servers, and member nodes
/usr/lpp/mmfs/bin/mmlsconfig    # lists the cluster-wide configuration parameters and file systems
/usr/lpp/mmfs/bin/mmlsnsd       # lists the Network Shared Disks (NSDs) and their NSD servers
The output listings for our cluster configuration are shown in Example 3-3, Example 3-4 on page 19, and Example 3-5 on page 20.
Example 3-3 Output of mmlscluster
[root@i05n67 ~]# /usr/lpp/mmfs/bin/mmlscluster
 
GPFS cluster information
========================
GPFS cluster name: i05i36.pbm.ihost.com
GPFS cluster id: 9306829523410583102
GPFS UID domain: i05i36.pbm.ihost.com
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
 
GPFS cluster configuration servers:
-----------------------------------
Primary server: i05i36.pbm.ihost.com
Secondary server: i05i37.pbm.ihost.com
 
Node Daemon node name IP address Admin node name Designation
-----------------------------------------------------------------------------------------------
1 i05i36.pbm.ihost.com 129.40.128.36 i05i36.pbm.ihost.com quorum
2 i05i37.pbm.ihost.com 129.40.128.37 i05i37.pbm.ihost.com quorum
3 i05i67.pbm.ihost.com 129.40.128.67 i05i67.pbm.ihost.com
4 i05i68.pbm.ihost.com 129.40.128.68 i05i68.pbm.ihost.com
5 i05i39.pbm.ihost.com 129.40.128.39 i05i39.pbm.ihost.com
6 i05i40.pbm.ihost.com 129.40.128.40 i05i40.pbm.ihost.com
7 i05i41.pbm.ihost.com 129.40.128.41 i05i41.pbm.ihost.com
8 i05i42.pbm.ihost.com 129.40.128.42 i05i42.pbm.ihost.com
9 i05i43.pbm.ihost.com 129.40.128.43 i05i43.pbm.ihost.com
10 i05i44.pbm.ihost.com 129.40.128.44 i05i44.pbm.ihost.com
11 i05i45.pbm.ihost.com 129.40.128.45 i05i45.pbm.ihost.com
12 i05i46.pbm.ihost.com 129.40.128.46 i05i46.pbm.ihost.com
13 i05i47.pbm.ihost.com 129.40.128.47 i05i47.pbm.ihost.com
14 i05i48.pbm.ihost.com 129.40.128.48 i05i48.pbm.ihost.com
15 i05i49.pbm.ihost.com 129.40.128.49 i05i49.pbm.ihost.com
16 i05i50.pbm.ihost.com 129.40.128.50 i05i50.pbm.ihost.com
17 i05i51.pbm.ihost.com 129.40.128.51 i05i51.pbm.ihost.com
18 i05i52.pbm.ihost.com 129.40.128.52 i05i52.pbm.ihost.com
19 i05i53.pbm.ihost.com 129.40.128.53 i05i53.pbm.ihost.com
20 i05i54.pbm.ihost.com 129.40.128.54 i05i54.pbm.ihost.com
21 i05i55.pbm.ihost.com 129.40.128.55 i05i55.pbm.ihost.com
22 i05i56.pbm.ihost.com 129.40.128.56 i05i56.pbm.ihost.com
23 i05i57.pbm.ihost.com 129.40.128.57 i05i57.pbm.ihost.com
24 i05i58.pbm.ihost.com 129.40.128.58 i05i58.pbm.ihost.com
25 i05i59.pbm.ihost.com 129.40.128.59 i05i59.pbm.ihost.com
26 i05i60.pbm.ihost.com 129.40.128.60 i05i60.pbm.ihost.com
27 i05i61.pbm.ihost.com 129.40.128.61 i05i61.pbm.ihost.com
28 i05i62.pbm.ihost.com 129.40.128.62 i05i62.pbm.ihost.com
29 i05i63.pbm.ihost.com 129.40.128.63 i05i63.pbm.ihost.com
30 i05i64.pbm.ihost.com 129.40.128.64 i05i64.pbm.ihost.com
31 i05i65.pbm.ihost.com 129.40.128.65 i05i65.pbm.ihost.com
32 i05i66.pbm.ihost.com 129.40.128.66 i05i66.pbm.ihost.com
33 i05i38.pbm.ihost.com 129.40.128.38 i05i38.pbm.ihost.com
Example 3-4 shows the output of the mmlsconfig command.
Example 3-4 Output of mmlsconfig
[root@i05n67 ~]# /usr/lpp/mmfs/bin/mmlsconfig
Configuration data for cluster i05i36.pbm.ihost.com:
----------------------------------------------------
myNodeConfigNumber 3
clusterName i05i36.pbm.ihost.com
clusterId 9306829523410583102
autoload no
minReleaseLevel 3.4.0.7
dmapiFileHandleSize 32
adminMode central
 
File systems in cluster i05i36.pbm.ihost.com:
---------------------------------------------
/dev/fs1
Example 3-5 shows the output of the mmlsnsd command.
Example 3-5 Output of mmlsnsd
[root@i05n67 ~]# /usr/lpp/mmfs/bin/mmlsnsd
 
File system Disk name NSD servers
---------------------------------------------------------------------------
fs1 i05i67nsd i05i67.pbm.ihost.com
fs1 i05i68nsd i05i68.pbm.ihost.com
 
General Parallel File System (GPFS): GPFS on Logical Volume Manager (LVM) logical volumes is not supported, but a small “as is” workaround can be applied to create a working environment. This workaround proves useful in small environments, which are usually set up for testing purposes.
Create the /var/mmfs/etc/nsddevices file (on each NSD server) to define eligible devices for NSD:
#!/bin/bash
# Report the GPFS logical volume as an eligible NSD device if it is active on this node
minor=$(lvs -o lv_name,lv_kernel_major,lv_kernel_minor 2>/dev/null | awk '/ gpfslv / { print $3 }' 2>/dev/null)
if [ -n "$minor" ]; then
    echo "gpfslv dmm"
fi
# A nonzero exit code also lets GPFS continue with its default device discovery
exit 1
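GPFS uses this user exit only if the file is executable, so remember to set the execute permission on each NSD server:
chmod +x /var/mmfs/etc/nsddevices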
Create a GPFS logical volume (LV) on each NSD server:
lvcreate -n gpfslv -L 150G rootvg
Create the /dev node (GPFS needs the device node to be defined directly under /dev):
ln -s /dev/rootvg/gpfslv /dev/gpfslv
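For reference, a minimal sketch of how the NSDs and the 300-GB file system can then be created from these logical volumes follows. It assumes the GPFS 3.4 colon-delimited disk descriptor format and the NSD names that are shown in Example 3-5 on page 20; the exact syntax varies by GPFS release, so check the documentation for your level:
# Disk descriptors: device:NSD server::usage:failure group:NSD name
cat > /tmp/gpfs_disks.desc <<EOF
gpfslv:i05i67.pbm.ihost.com::dataAndMetadata:-1:i05i67nsd
gpfslv:i05i68.pbm.ihost.com::dataAndMetadata:-1:i05i68nsd
EOF
/usr/lpp/mmfs/bin/mmcrnsd -F /tmp/gpfs_disks.desc                       # create the NSDs (the descriptor file is rewritten for reuse)
/usr/lpp/mmfs/bin/mmcrfs /gpfs/fs1 fs1 -F /tmp/gpfs_disks.desc -A yes   # create file system fs1 mounted at /gpfs/fs1
/usr/lpp/mmfs/bin/mmmount fs1 -a                                        # mount fs1 on all nodes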
From this point, you can follow the GPFS Quick Start Guide for Linux:
For more details about GPFS and other possible configurations, see Implementing the IBM General Parallel File System (GPFS) in a Cross-Platform Environment, SG24-7844:
3.1.3 Infrastructure planning
The initial infrastructure is subdivided to accommodate the requirements of the different IBM Platform Computing products and to enable the team to test different scenarios without having to worry about conflicting software components.
Figure 3-2 on page 21 shows the environment that is configured for our IBM Platform HPC installation. The installation requires two separate Ethernet networks: one public Ethernet network and one private Ethernet network for the cluster. To satisfy this requirement, we installed an additional 1 Gb Ethernet switch to provide a separate subnet for the private cluster. For details about IBM Platform HPC, see Chapter 6, “IBM Platform High Performance Computing” on page 181.
Figure 3-2 IBM Platform HPC setup
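Typically, the installer node reaches the private cluster subnet through a statically addressed second Ethernet interface. As a generic illustration only (the interface name, file path, and addresses are assumptions, not values from our environment), such an interface can be defined on Red Hat Enterprise Linux similar to the following:
# Hypothetical static configuration for the private cluster interface
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 <<EOF
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.1.1
NETMASK=255.255.255.0
ONBOOT=yes
EOF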
Figure 3-3 shows the environment that is configured for our IBM Platform Load Sharing Facility (LSF) and Symphony cluster installation. For details about IBM Platform LSF, see Chapter 4, “IBM Platform Load Sharing Facility (LSF) product family” on page 27. For details about IBM Platform Symphony, see Chapter 5, “IBM Platform Symphony” on page 111.
Figure 3-3 IBM Platform LSF and IBM Platform Symphony cluster setup
Figure 3-4 shows the environment that we configured for our IBM Platform Cluster Manager Advanced Edition cluster installation. Like IBM Platform HPC, IBM Platform Cluster Manager Advanced Edition requires a two-network setup. The additional Ethernet switch is used again to provide another separate subnet for the private cluster.
For details about IBM Platform Cluster Manager Advance Edition, see Chapter 7, “IBM Platform Cluster Manager Advanced Edition” on page 211.
Figure 3-4 IBM Platform Cluster Manager Advanced Edition cluster setup
3.2 Software packages
The software that is used and the package file paths (relative to /gpfs/fs1/install) are listed here:
IBM Platform HPC V3.2:
 – Description
Base software to install the master node for the IBM Platform HPC environment
 – Package files:
PlatformHPC/hpc-3.2-with-PCM.rhel.iso
IBM Platform Cluster Manager Advanced Edition V3.2:
 – Description
Base software to install the master and provisioning hosts for the IBM Platform Cluster Manager Advanced Edition environment
 – Package files:
 • PCMAE/pcmae_3.2.0.0_agent_linux2.6-x86_64.bin
 • PCMAE/pcmae_3.2.0.0_mgr_linux2.6-x86_64.bin
IBM Platform Symphony V5.2:
 – Description
Software packages for installing an IBM Platform Symphony cluster. The installation can use either the *.rpm or the *.bin packages, depending on the preferred method. In our environment, the RPM packages are used.
 – Package files:
 • Symphony/egocomp-lnx26-lib23-x64-1.2.6.rpm
 • Symphony/ego-lnx26-lib23-x64-1.2.6.rpm
 • Symphony/soam-lnx26-lib23-x64-5.2.0.rpm
 • Symphony/symcompSetup5.2.0_lnx26-lib23-x64.bin
 • Symphony/symSetup5.2.0_lnx26-lib23-x64.bin
IBM Platform LSF V8.3:
 – Description
Base software for installing an IBM Platform LSF cluster with the following add-ons:
 • IBM Platform Application Center
 • IBM Platform Process Manager
 • IBM Platform RTM
Also under LSF/multihead are the packages for an IBM Platform LSF cluster when it is installed in a previously configured Symphony cluster.
 – Package files:
 • LSF/lsf8.3_licsched_lnx26-libc23-x64.tar.Z
 • LSF/lsf8.3_linux2.6-glibc2.3-x86_64.tar.Z
 • LSF/lsf8.3_lsfinstall_linux_x86_64.tar.Z
 • LSF/lsf8.3_lsfinstall.tar.Z
 • LSF/PAC/pac8.3_standard_linux-x64.tar.Z
 • LSF/PPM/ppm8.3.0.0_ed_lnx26-lib23-x64.tar.Z
 • LSF/PPM/ppm8.3.0.0_fm_lnx26-lib23-x64.tar.Z
 • LSF/PPM/ppm8.3.0.0_pinstall.tar.Z
 • LSF/PPM/ppm8.3.0.0_svr_lnx26-lib23-x64.tar.Z
 • LSF/RTM/adodb492.tgz
 • LSF/RTM/cacti-plugin-0.8.7g-PA-v2.9.tar.gz
 • LSF/RTM/php-snmp-5.3.3-3.el6_1.3.x86_64.rpm
 • LSF/RTM/plugin%3Aboost-v4.3-1.tgz
 • LSF/RTM/plugin%3Aclog-v1.6-1.tgz
 • LSF/RTM/plugin%3Anectar-v0.34-1.tgz
 • LSF/RTM/plugin%3Asettings-v0.71-1.tgz
 • LSF/RTM/plugin%3Asuperlinks-v1.4-2.tgz
 • LSF/RTM/plugin%3Asyslog-v1.22-2.tgz
 • LSF/RTM/rtm-datapoller-8.3-rhel6.tar.gz
 • LSF/RTM/rtm-server-8.3-rhel6.tar.gz
 • LSF/multihead/lsf-linux2.6-glibc2.3-x86_64-8.3-199206.rpm
 • LSF/multihead/lsf8.3_documentation.tar.Z
 • LSF/multihead/lsf8.3_documentation.zip
 • LSF/multihead/lsf8.3_lsfinstall.tar.Z
 • LSF/multihead/lsf8.3_sparc-sol10-64.tar.Z
 • LSF/multihead/lsf8.3_win-x64.msi
 • LSF/multihead/lsf8.3_win32.msi
 • LSF/multihead/lsf8.3_x86-64-sol10.tar.Z
 • LSF/multihead/patch/lsf-linux2.6-glibc2.3-x86_64-8.3-198556.tar.gz
 • LSF/multihead/patch/ego1.2.6_win-x64-198556.msp
 • LSF/multihead/patch/readme_for_patch_Symphony_5.2.htm
 • LSF/multihead/perf-ego-dbschema.tar
 • LSF/multihead/perf-lsf-dbschema.tar
Hadoop V1.0.1:
 – Description
Software to install a Hadoop Distributed File System (HDFS) and MapReduce Hadoop cluster that works with the IBM Platform Symphony MapReduce Framework.
 – Package file:
Hadoop/hadoop-1.0.1.tar.gz
Sun Java Development Kit (JDK) V1.6.0_25:
 – Description
Java runtime environment for Hadoop
 – Package file:
Java/jdk1.6.0_25.tar
Oracle Database Express Edition V11.2.0:
 – Description
Oracle database and client to be used by IBM Platform Cluster Manager Advanced Edition to store and retrieve data for system operations.
 – Package files:
 • Oracle/oracle-xe-11.2.0-1.0.x86_64.rpm.zip
 • Oracle/oracle-xe-11.2.0-1.0.x86_64.rpm
 • Oracle/oracle-instantclient11.2-sqlplus-11.2.0.2.0.x86_64.rpm
 • Oracle/oracle-instantclient11.2-basic-11.2.0.2.0.x86_64.rpm
Red Hat Enterprise Linux V6.2 or V6.3:
 – Description
ISO images that are used by IBM Platform HPC and IBM Platform Cluster Manager Advanced Edition to build images and templates for compute nodes.
 – Package files:
 • RHEL62/RHEL6.2-20111117.0-Server-x86_64-DVD1.iso
 • RHEL63/RHEL6.3-20120613.2-Server-x86_64-DVD1.iso