Server installation
This chapter provides instructions for installing and configuring the IBM Parallel Environment Developer Edition (PEDE) server component on supported operating systems. It also covers tuning tips and customizations for High Performance Computing (HPC) clusters.
The following details are covered in this chapter:
Additional software for integration with PEDE:
 – Job schedulers (IBM TWS LoadLeveler)
 – Distributed file systems (IBM GPFS)
 – Environment control (environment modules)
 – Software revision control tools (Git or CVS)
PEDE installation instructions (AIX and Linux)
Post-installation tuning:
 – Quick Parallel Environment Runtime tuning
 – GPFS tunable parameters affecting HPC performance
 – HPC Cluster verification
 – Environment customization (environment modules, shell)
4.1 Software requirements
This section describes the software requirements for the IBM PE Developer Edition server component on the three supported operating systems. “Supported operating systems (software)” on page 3 shows the supported operating systems that are available for IBM PE Developer Edition. Table 4-1 shows the software packages that are required to install the IBM PE Developer Edition server component.
Table 4-1 Software requirements for PEDE server component
AIX 7.1:
 – IBM XLC/C++ Compilers (12.1 or later) and IBM XLF Compilers (14.1 or later)
 – RSCT and SRC
 – Parallel Environment Runtime
RHEL 6.2 (on Power):
 – compat-libstdc++ (ppc and ppc64)
 – libgcc (ppc and ppc64)
 – libstdc++ (ppc and ppc64)
 – libstdc++-devel (ppc and ppc64)
 – libXp
 – openmotif
 – IBM XLC/C++ Compilers (12.1 or later) and IBM XLF Compilers (14.1 or later)
 – RSCT and SRC
 – Parallel Environment Runtime
RHEL 6.2 (on x86_64):
 – compat-libstdc++ (32 bit and 64 bit)
 – libgcc (32 bit and 64 bit)
 – libstdc++ (32 bit and 64 bit)
 – libstdc++-devel (32 bit and 64 bit)
 – libXp
 – openmotif
 – GNU Compilers (compiler option 1) or Intel Compilers, 11.1 or later (compiler option 2)
 – SRC
 – Parallel Environment Runtime
SLES 11 SP2 (on x86_64):
 – libstdc++-devel and libstdc++43-devel
 – libgcc (32 bit and 64 bit)
 – libstdc++33 (32 bit and 64 bit)
 – libstdc++43 (32 bit and 64 bit)
 – GNU Compilers (compiler option 1) or Intel Compilers, 11.1 or later (compiler option 2)
 – SRC
 – Parallel Environment Runtime
Install targets: The software requirements in Table 4-1 focus on compilation nodes. Depending on their purpose, compute nodes might not require the full set of software packages, only the runtime packages.
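Before installing on Linux, the prerequisite packages can be checked in bulk. The following sketch is illustrative only: the package names are the RHEL 6.2 prerequisites from Table 4-1, and the function simply compares names against a saved listing such as the output of rpm -qa --qf '%{NAME}\n', so it can be exercised without rpm itself.

```shell
# Sketch: report which Table 4-1 prerequisite packages are missing, given a
# file that lists installed package names (one per line), for example
# generated with: rpm -qa --qf '%{NAME}\n' > installed.txt
check_prereqs() {
    installed_list=$1
    for pkg in compat-libstdc++ libgcc libstdc++ libstdc++-devel libXp openmotif
    do
        if grep -qx "$pkg" "$installed_list"; then
            echo "OK $pkg"
        else
            echo "MISSING $pkg"
        fi
    done
}

# Demo with a sample listing; on a real node, point it at the rpm -qa output.
printf '%s\n' libgcc libstdc++ openmotif > installed.txt
check_prereqs installed.txt
# prints OK for libgcc, libstdc++, and openmotif, and MISSING for the rest
```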
4.2 PEDE packaging considerations
This section details the IBM PE Developer Edition server component packaging and presents additional software that can be integrated with supported environments.
4.2.1 Package contents
The IBM PE Developer Edition server component is distributed as a single package that is available on DVD media with the following contents:
IBM International Program License Agreement in multi-language booklet (LC23-5123), and its License Information (L-RHAN-8KEP76) in multiple languages (ppe.loc.license)
Product readme file that describes the program's specified operating environment and program specifications
Documentation for the HPC Toolkit (hpct_guide.pdf)
Installation document for Eclipse PTP (ptp_inst_guide.pdf)
Program packages:
 – PTP Eclipse (ppedev.ptp, ppedev.ptp.rte)
 – HPC Toolkit (ppedev.hpct, ppedev.rte)
These contents apply to all supported distributions, as detailed in Table 4-2.
Table 4-2 IBM PEDE server program packages for respective supported operating systems
AIX 7.1:
 – ppedev.loc.license: IBM PEDE License
 – ppedev.ptp.rte: PTP Runtime
 – ppedev.ptp: PTP Framework
 – ppedev.rte: IBM HPC Toolkit Runtime
 – ppedev.hpct: IBM HPC Toolkit
RHEL 6.2 (on Power):
 – ppedev_license-1.2.0-0.ppc64.rpm: IBM PEDE License
 – ppedev_ptp_rte_rh6p-1.2.0-0.ppc64.rpm: PTP Runtime
 – ppedev_ptp_rh6p-1.2.0-0.ppc64.rpm: PTP Framework
 – ppedev_runtime_rh6p-1.2.0-0.ppc64.rpm: IBM HPC Toolkit Runtime
 – ppedev_hpct_rh6p-1.2.0-0.ppc64.rpm: IBM HPC Toolkit
RHEL 6.2 (on x86_64 systems):
 – ppedev_license-1.2.0-0.x86_64.rpm: IBM PEDE License
 – ppedev_ptp_rte_rh6x-1.2.0-0.x86_64.rpm: PTP Runtime
 – ppedev_ptp_rh6x-1.2.0-0.x86_64.rpm: PTP Framework
 – ppedev_runtime_rh6x-1.2.0-0.x86_64.rpm: IBM HPC Toolkit Runtime
 – ppedev_hpct_rh6x-1.2.0-0.x86_64.rpm: IBM HPC Toolkit
SLES 11 SP2 (on x86_64 systems):
 – ppedev_license-1.2.0-0.x86_64.rpm: IBM PEDE License
 – ppedev_ptp_rte_sles11x-1.2.0-0.x86_64.rpm: PTP Runtime
 – ppedev_ptp_sles11x-1.2.0-0.x86_64.rpm: PTP Framework
 – ppedev_runtime_sles11x-1.2.0-0.x86_64.rpm: IBM HPC Toolkit Runtime
 – ppedev_hpct_sles11x-1.2.0-0.x86_64.rpm: IBM HPC Toolkit
 
IBM PEDE clients: The client packages are located in the /opt/ibmhpc/ppedev.ptp/eclipse directory. Further details about the supported operating systems and the available program packages are in “Supported operating systems (software)” on page 3.
4.2.2 Additional software
In this section, we list software tools that enrich the IBM PE Developer Edition experience. This software is not included in the IBM PE Developer Edition server package and can be either an IBM product or Open Source software. The Open Source software can be obtained either by compiling the source code or as binaries distributed with the operating system. Table 4-3 details the names of these tools and their corresponding program packages for the following areas:
Job schedulers
Distributed file systems
Environment control tools
Software revision control tools
 
Package versions: The software package versions presented in Table 4-3 are examples of supported versions. For complete support details, consult the corresponding online product support.
Table 4-3 Program package names for respective operating systems
AIX 7.1:
 – IBM TWS LoadLeveler: LoadL.resmgr.full, LoadL.resmgr.loc.license, LoadL.resmgr.msg.en_US, LoadL.scheduler.full, LoadL.scheduler.loc.license, LoadL.scheduler.msg.en_US, LoadL.scheduler.webui
 – IBM GPFS: gpfs.base, gpfs.msg.en_US, gpfs.docs.data
 – Environment Modules: (needs compilation from source)
 – Git: git-4.3.20-4
 – CVS: cvs-1.11.17-3
RHEL 6.2 (on Power):
 – IBM TWS LoadLeveler: LoadL-scheduler-full-RH6-PPC64-5.1.0.10-0.ppc64, LoadL-utils-RH6-PPC64-5.1.0.10-0.ppc64, LoadL-resmgr-full-RH6-PPC64-5.1.0.10-0.ppc64, LoadL-full-license-RH6-PPC64-5.1.0.0-0.ppc64
 – IBM GPFS: gpfs.base-3.5.0-3.ppc64, gpfs.gpl-3.5.0-3.noarch, gpfs.docs-3.5.0-3.noarch, gpfs.msg.en_US-3.5.0-3.noarch
 – Environment Modules: environment-modules-3.2.7b-6.el6.ppc64
 – Git: git-1.7.1-2.el6_0.1.ppc64
 – CVS: cvs-1.11.23-11.el6_0.1.ppc64
SLES 11 SP2 (64 bit):
 – IBM TWS LoadLeveler: LoadL-full-license-SLES11-X86_64-5.1.0.4-0, LoadL-scheduler-full-SLES11-X86_64-5.1.0.11-0, LoadL-resmgr-full-SLES11-X86_64-5.1.0.11-0
 – IBM GPFS: gpfs.base-3.4.0-11, gpfs.gplbin-2.6.32.12-0.7-default-3.4.0-11, gpfs.msg.en_US-3.4.0-11
 – Environment Modules: (needs compilation from source)
 – Git: git-core-1.6.0.2-7.26
 – CVS: cvs-1.12.12-144.21
 
Eclipse (synchronized projects): A software revision control system is required to use these projects. Git or CVS is available from the respective Linux distribution repositories and, for AIX, from the AIX Toolbox.
Job schedulers
Job schedulers improve the use of cluster resources and enable queued job execution.
IBM TWS LoadLeveler
IBM Tivoli Workload Scheduler LoadLeveler enables HPC clusters to integrate job scheduling with the Parallel Operating Environment (POE) runtime. It is a licensed product and can be obtained from IBM for the AIX, RHEL, and SUSE operating systems.
Distributed file systems
Distributed file systems enable distributed jobs to run and to be debugged within a shared file system. They are also used to simplify user data management and to increase cluster I/O performance.
IBM GPFS
The IBM General Parallel File System is a high performance and scalable distributed file system. It is a licensed product and can be obtained from IBM for AIX, RHEL, and SUSE operating systems.
Environment control tools
Environment control tools enable customized environments to be selected on demand when building Eclipse projects.
Environment modules
Environment Modules is Open Source software that is useful for creating different compilation environments:
Availability: UNIX and Linux
License: GNU GPL v2
Download options:
 – Operating system distributed (compiled)
Software revision control tools
Software revision control tools add control for synchronized and remote Eclipse projects.
Git
Availability: POSIX compatible operating systems (UNIX, Linux and Windows)
License: GNU GPL v2
Download options:
 – Operating system distributed
Concurrent Versions System
Availability: UNIX, Linux and Windows
License: GNU GPL v2
Download options:
 – Operating system distributed
4.3 Installation
This section describes the IBM PE Developer Edition server installation for the supported operating systems. We list installation instructions for the Login/Front End node for the following operating systems:
AIX 7.1
RHEL 6 (on Power)
SLES 11 SP2 or RHEL 6.2 (x86_64)
When using a cluster for HPC purposes, the packages do not need to be installed on every single node. Table 4-4 describes where you need to install each package, depending on the node type.
Table 4-4 IBM PE Developer Edition server installation layout
HPC Toolkit Runtime:
 – Compute nodes (with disk): install
 – Compute nodes (diskless): install
 – Login/Front End nodes: install
HPC Toolkit (~75 MB):
 – Compute nodes (with disk): no need to install
 – Compute nodes (diskless): no need to install
 – Login/Front End nodes: install
PTP Framework Runtime:
 – Compute nodes (with disk): install (if using the PTP debugger)
 – Compute nodes (diskless): install (if using the PTP debugger)
 – Login/Front End nodes: install
PTP Framework (~1.5 GB):
 – Compute nodes (with disk): no need to install
 – Compute nodes (diskless): no need to install
 – Login/Front End nodes: install (here or on another server)
 
PTP Framework: This package needs to be installed only once, on a single server. It contains all of the supported PTP client packages.
4.3.1 AIX 7.1
After all requirements from Table 4-1 on page 48 are met, follow these steps to install PEDE on AIX 7.1:
1. Copy all files to a directory named <images_directory>
2. Install the IBM HPC Toolkit runtime: installp -a -X -Y -d <images_directory> ppedev.rte
3. Install the PTP runtime: installp -a -X -Y -d <images_directory> ppedev.ptp.rte
4. Install the PTP framework: installp -a -X -Y -d <images_directory> ppedev.ptp
5. Install the IBM HPC Toolkit: installp -a -X -Y -d <images_directory> ppedev.hpct
4.3.2 RHEL 6 (on IBM POWER)
After all requirements from Table 4-1 on page 48 are met, follow these steps to install PEDE on RHEL 6:
1. Install the license: rpm -hiv ppedev_license-1.2.0-0.ppc64.rpm
 
License installation: After installing the license package, the rpm command displays informative text, as shown in Example 4-1.
Example 4-1 License acceptance procedures
IBM PE Developer Edition License RPM is installed. To accept the LICENSE please run:
/opt/ibmhpc/ppedev.hpct/lap/accept_ppedev_license.sh
Before calling accept_ppedev_license.sh, you must set the IBM_PPEDEV_LICENSE_ACCEPT
environment variable to one of the following values:
yes = Automatic license acceptance.
no = Manual license acceptance.
 
2. Export the license agreement: export IBM_PPEDEV_LICENSE_ACCEPT=yes
3. Accept the license: /opt/ibmhpc/ppedev.hpct/lap/accept_ppedev_license.sh
4. Install the PTP runtime: rpm -hiv ppedev_ptp_rte_rh6p-1.2.0-0.ppc64.rpm
5. Install the PTP framework: rpm -hiv ppedev_ptp_rh6p-1.2.0-0.ppc64.rpm
6. Install the IBM HPC Toolkit runtime: rpm -hiv ppedev_runtime_rh6p-1.2.0-0.ppc64.rpm
7. Install the IBM HPC Toolkit: rpm -hiv ppedev_hpct_rh6p-1.2.0-0.ppc64.rpm
 
HPC Toolkit: Step 7 fails with dependency errors if the libXp and openmotif rpms are not installed, as detailed in Table 4-1 on page 48.
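For repeated deployments, steps 1 through 7 can be wrapped into a single script. The following is a sketch, not an official installer: by default it only prints each command (DRY_RUN=1), and it assumes the rpm files are in the current directory.

```shell
#!/bin/sh
# Sketch: steps 1-7 for RHEL 6.2 on Power as one script. DRY_RUN=1 (the
# default) only prints each command; set DRY_RUN=0 on a real node to run them.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run rpm -hiv ppedev_license-1.2.0-0.ppc64.rpm
export IBM_PPEDEV_LICENSE_ACCEPT=yes
run /opt/ibmhpc/ppedev.hpct/lap/accept_ppedev_license.sh
run rpm -hiv ppedev_ptp_rte_rh6p-1.2.0-0.ppc64.rpm
run rpm -hiv ppedev_ptp_rh6p-1.2.0-0.ppc64.rpm
run rpm -hiv ppedev_runtime_rh6p-1.2.0-0.ppc64.rpm
run rpm -hiv ppedev_hpct_rh6p-1.2.0-0.ppc64.rpm
```

The dry-run default keeps the command order reviewable before anything is installed.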
4.3.3 SLES 11 SP2 or RHEL 6.2 (x86_64)
After all requirements from Table 4-1 on page 48 are met, follow these steps to install PEDE on SLES 11 SP2 or RHEL 6.2:
1. Install the license: rpm -hiv ppedev_license-1.2.0-0.x86_64.rpm
 
License installation: After installing the license package, the rpm command displays the text shown in Example 4-2.
Example 4-2 License acceptance procedures for SLES 11 SP2
IBM PE Developer Edition License RPM is installed. To accept the LICENSE please run:
/opt/ibmhpc/ppedev.hpct/lap/accept_ppedev_license.sh
Before calling accept_ppedev_license.sh, you must set the IBM_PPEDEV_LICENSE_ACCEPT
environment variable to one of the following values:
yes = Automatic license acceptance.
no = Manual license acceptance.
 
2. Export the license agreement: export IBM_PPEDEV_LICENSE_ACCEPT=yes
3. Accept the license: /opt/ibmhpc/ppedev.hpct/lap/accept_ppedev_license.sh
For SLES 11 SP2:
a. Install the PTP runtime: rpm -hiv ppedev_ptp_rte_sles11x-1.2.0-0.x86_64.rpm
b. Install the PTP framework: rpm -hiv ppedev_ptp_sles11x-1.2.0-0.x86_64.rpm
c. Install the IBM HPC Toolkit runtime:
rpm -hiv ppedev_runtime_sles11x-1.2.0-0.x86_64.rpm
d. Install the IBM HPC Toolkit: rpm -hiv ppedev_hpct_sles11x-1.2.0-0.x86_64.rpm
For RHEL 6.2:
a. Install the PTP runtime: rpm -hiv ppedev_ptp_rte_rh6x-1.2.0-0.x86_64.rpm
b. Install the PTP framework: rpm -hiv ppedev_ptp_rh6x-1.2.0-0.x86_64.rpm
c. Install the IBM HPC Toolkit runtime:
rpm -hiv ppedev_runtime_rh6x-1.2.0-0.x86_64.rpm
d. Install the IBM HPC Toolkit: rpm -hiv ppedev_hpct_rh6x-1.2.0-0.x86_64.rpm
4.4 Post-installation setup
This section describes which actions to take after installing IBM PE Developer Edition. The following recommendations are based on user experience, are subject to change at any time, and depend on the code-development scenario. The post-installation steps described in this section cover cluster product tuning, system environment configuration, and customization of components that work alongside IBM PE Developer Edition.
4.4.1 Quick Parallel Environment Runtime tuning
Because the Parallel Environment Runtime can be installed on clusters of different architectures and sizes, a set of tuning parameters must be verified and, if required, changed for each particular case. These actions tend to be time consuming and might also need investigation for advanced tuning. The Parallel Operating Environment (POE) delivers a script tool that can quickly evaluate which parameters must be changed, as shown in Example 4-3 and Example 4-4. The script paths for the AIX, and the RHEL and SUSE, operating systems are:
/opt/ibmhpc/pecurrent/ppe.poe/bin/pe_node_diag (AIX)
/opt/ibmhpc/pecurrent/base/bin/pe_node_diag (RHEL and SUSE)
Example 4-3 Output from /opt/ibmhpc/pecurrent/base/bin/pe_node_diag (RHEL 6.2 on Power)
# /opt/ibmhpc/pecurrent/base/bin/pe_node_diag
/proc/sys/net/ipv4/ipfrag_low_thresh has 196608 but 1048576 is recommended.
/proc/sys/net/ipv4/ipfrag_high_thresh has 262144 but 8388608 is recommended.
limit for nofiles is [1024],recommended is [4096]
limit for locked address space is [64],recommended is [unlimited]
For Example 4-3, the /etc/security/limits.conf and /etc/sysctl.conf files must be changed to accommodate the recommended parameter values.
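One way to persist those values is shown in the following fragments. They mirror the Example 4-3 recommendations; treat them as a sketch and substitute whatever values pe_node_diag reports on your own nodes.

```
# Additions to /etc/sysctl.conf (load with: sysctl -p)
net.ipv4.ipfrag_low_thresh = 1048576
net.ipv4.ipfrag_high_thresh = 8388608

# Additions to /etc/security/limits.conf
# (nofile = open files; memlock = locked address space, in KB)
*    -    nofile     4096
*    -    memlock    unlimited
```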
Example 4-4 Output from /opt/ibmhpc/pecurrent/ppe.poe/bin/pe_node_diag (AIX)
# /opt/ibmhpc/pecurrent/ppe.poe/bin/pe_node_diag
sb_max has 1114112 but 8388608 is recommended.
limit for data is [131072],recommended is [unlimited]
limit for nofiles is [2000],recommended is [4096]
maxuproc has 256 but 1024 is recommended.
In Example 4-4, change the /etc/security/limits file, and execute the following commands to accommodate the recommended values:
chdev -l sys0 -a maxuproc=1024
no -p -o sb_max=8388608
Verification: After all modifications, always start a new shell and run the script again (pe_node_diag) to ensure that all parameters are changed persistently. Sometimes a reboot is required to activate the changes.
 
Parallel Environment Runtime for AIX (1.1.0): By default, it uses rsh communication between nodes. To switch to SSH, edit the /etc/ppe.cfg file, and change the following line from:
PE_SECURITY_METHOD: COMPAT
to
PE_SECURITY_METHOD: SSH poesec /opt/ibmhpc/pecurrent/base/gnu/lib64/poesec_ossh.so m[t=-1,v=1.1.0]
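The same edit can be scripted. The sketch below works on a stand-in copy of the file so that it is safe to try; note that sed -i is a GNU sed option, so on AIX itself redirect to a temporary file instead.

```shell
# Sketch: rewrite the PE_SECURITY_METHOD line to use SSH. A stand-in copy of
# /etc/ppe.cfg is created here so the sketch can be exercised safely.
cfg=./ppe.cfg.test
printf 'PE_SECURITY_METHOD: COMPAT\n' > "$cfg"

sed -i 's|^PE_SECURITY_METHOD:.*|PE_SECURITY_METHOD: SSH poesec /opt/ibmhpc/pecurrent/base/gnu/lib64/poesec_ossh.so m[t=-1,v=1.1.0]|' "$cfg"

grep '^PE_SECURITY_METHOD' "$cfg"
```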
4.4.2 GPFS tunable parameters affecting HPC performance
If GPFS is used within an HPC cluster, performance is an important factor to improve. Consider changing a set of tunable parameters that depend on the file system size, the number of disks, nodes, and users, and the workload pattern. Consult the current GPFS online documentation for complete tuning details. The following parameters have a performance impact for HPC workloads (the relevant scenario is in parentheses):
maxFilesToCache (high workloads)
maxMBpS (InfiniBand networks)
maxReceiverThreads (clusters with a large number of nodes)
nsdMaxWorkerThreads (large number of NSDs per node)
numaMemoryInterleave (Linux)
pagepool (depends on available memory; random I/O and GPFS clients)
prefetchPct (sequential access)
prefetchThreads (large number of NSDs per node)
worker1Threads (high asynchronous or direct I/O)
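These tunables are changed with the GPFS mmchconfig command. The values in the sketch below are illustrative placeholders, not recommendations; derive real values from your memory size, NSD count, and workload. The guard makes the sketch a no-op on nodes without GPFS installed.

```shell
#!/bin/sh
# Sketch: set a few of the tunables above with mmchconfig.
# The attribute values are placeholders for illustration only.
MMCHCONFIG=/usr/lpp/mmfs/bin/mmchconfig

if [ -x "$MMCHCONFIG" ]; then
    # -i applies the change immediately and makes it permanent
    "$MMCHCONFIG" pagepool=4G,maxFilesToCache=10000,maxMBpS=8000 -i
else
    echo "GPFS not installed; skipping"
fi
```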
4.4.3 HPC Cluster verifications
This section combines a set of configuration verifications, such as environment variables, startup scripts, and security aspects, to enhance cluster interoperability and reduce the need for troubleshooting in case of a software problem.
SSH:
 – Check whether the ~/.ssh/known_hosts file is populated with all nodes. If not, use ssh-keyscan.
Parallel Environment Runtime:
 – Create the hosts.list file in your home directory.
 – Check whether /etc/hosts.equiv or ~/.rhosts contains all the node names and whether they are all resolvable to IP addresses (if using “PE_SECURITY_METHOD: COMPAT”).
LoadLeveler
 – Create mpd.hosts file in your home directory.
GPFS
 – Use mmchconfig to tune GPFS; the default GPFS configuration is not ideal for HPC clusters.
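For the SSH check, ssh-keyscan can populate known_hosts in bulk. In the sketch below, host.list is an assumed file name (one node name per line); review the generated file before appending it to ~/.ssh/known_hosts.

```shell
#!/bin/sh
# Sketch: gather RSA host keys for every node named in ./host.list into
# known_hosts.new. Nothing is written to ~/.ssh/known_hosts automatically.
hostfile=./host.list
: > known_hosts.new

if [ -r "$hostfile" ]; then
    while read -r node; do
        [ -n "$node" ] && ssh-keyscan -t rsa "$node" >> known_hosts.new 2>/dev/null
    done < "$hostfile"
fi

# After reviewing: cat known_hosts.new >> ~/.ssh/known_hosts
```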
4.4.4 Customizing the environment
Developing with Eclipse permits the user to customize the build environment directly from the GUI. Although the GUI is simple to use, for larger or more complex projects the operating system environment can be customized to increase interoperability between users. This section shows examples of how to:
Create new modules by using the environment modules tool.
Customize the shell.
Using the environment modules tool (RHEL 6.2)
This tool provides creation and management of modules that differentiate compilation environments, such as different versions of the compilers.
 
Note: The tool source code is also available for compilation on UNIX systems. The installation directory can differ from the one illustrated here, which applies only to RHEL 6.2.
Creating new modules
It is possible to create modules based on a specific format, as detailed in Example 4-5.
Example 4-5 The use.own modulefile from environment-modules
# cat /usr/share/Modules/modulefiles/use.own
#%Module1.0#####################################################################
##
## use.own modulefile
##
## modulefiles/use.own. Generated from use.own.in by configure.
##
proc ModulesHelp { } {
    global rkoversion

    puts stderr " This module file will add $HOME/privatemodules to the"
    puts stderr " list of directories that the module command will search"
    puts stderr " for modules. Place your own module files here."
    puts stderr " This module, when loaded, will create this directory"
    puts stderr " if necessary."
    puts stderr " Version $rkoversion "
}
 
module-whatis "adds your own modulefiles directory to MODULEPATH"
 
# for Tcl script use only
set rkoversion 3.2.7
 
eval set [ array get env HOME ]
set ownmoddir $HOME/privatemodules
 
# create directory if necessary
if [ module-info mode load ] {
    if { ! [ file exists $ownmoddir ] } {
        file mkdir $ownmoddir
        set null [open $ownmoddir/null w]
        puts $null "#%Module########################################################################"
        puts $null "##"
        puts $null "## null modulefile"
        puts $null "##"
        puts $null "proc ModulesHelp { } {"
        puts $null "    global version"
        puts $null ""
        puts $null "    puts stderr \" This module does absolutely nothing.\""
        puts $null "    puts stderr \" It's meant simply as a place holder in your\""
        puts $null "    puts stderr \" dot file initialization.\""
        puts $null "    puts stderr \" Version \$version \""
        puts $null "}"
        puts $null ""
        puts $null "module-whatis \"does absolutely nothing\""
        puts $null ""
        puts $null "# for Tcl script use only"
        puts $null "set version 3.2.7"
    }
}
 
module use --append $ownmoddir
Configuring the environment modules tool:
 – Default directory for the configuration modules: /usr/share/Modules/modulefiles/
 – Modules directory can be changed in the file: /usr/share/Modules/init/.modulespath
 – Within the modules directories, several other directories can be created separately
Modules shell initialization:
 – Uses /etc/profile.d/modules.sh (or .csh) script to initialize modules
 – Compatible shells: bash, csh, ksh, perl, python, sh, tcsh, and zsh
Modules examples:
 – Null (Example 4-6)
 – Intel MPI compilers (Example 4-7)
 – Intel C/C++ and Fortran compilers (Example 4-8 on page 59)
Example 4-6 Delivered null module (does nothing)
# cat /usr/share/Modules/modulefiles/null
#%Module1.0#####################################################################
##
## null modulefile
##
## modulefiles/null. Generated from null.in by configure.
##
proc ModulesHelp { } {
global version
puts stderr " This module does absolutely nothing."
puts stderr " It's meant simply as a place holder in your"
puts stderr " dot file initialization."
puts stderr " Version $version "
}
module-whatis "does absolutely nothing"
# for Tcl script use only
set version "3.2.8"
Example 4-7 shows the Intel MPI compilers module.
Example 4-7 Intel MPI compilers module
# cat intel/impi-4.0.2.003
#%Module -*- tcl -*-
##
## dot modulefile
##
proc ModulesHelp { } {
global intelversion
puts stderr " Adds 64-bit Intel MPI to your environment"
}
module-whatis "Adds 64-bit Intel MPI to your environment"
# for Tcl script use only
set intelversion 4.0.2.003
prepend-path PATH /opt/intel/impi/$intelversion/intel64/bin
prepend-path LD_LIBRARY_PATH /opt/intel/impi/$intelversion/intel64/lib
Example 4-8 shows the Intel C/C++ and Fortran compilers module.
Example 4-8 Intel C/C++ and Fortran compilers module
# cat intel/compilers-11.1.073
#%Module -*- tcl -*-
##
## dot modulefile
##
proc ModulesHelp { } {
global intelversion
puts stderr " Adds 64-bit Intel C/C++ and Fortran compilers to your environment"
}
module-whatis "Adds 64-bit Intel C/C++ and Fortran compilers to your environment"
 
prepend-path PATH /opt/intel/Compiler/11.1/073/bin/intel64
prepend-path MANPATH /opt/intel/Compiler/11.1/073/man/en_US
prepend-path LD_LIBRARY_PATH /opt/intel/Compiler/11.1/073/lib/intel64
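With the delivered use.own module loaded, users can keep personal modulefiles under $HOME/privatemodules. The sketch below creates a minimal private module; the compiler path and module name are placeholders for illustration, not real product paths.

```shell
#!/bin/sh
# Sketch: create a minimal private modulefile in $HOME/privatemodules, the
# directory that the use.own module (Example 4-5) adds to MODULEPATH.
mkdir -p "$HOME/privatemodules"

cat > "$HOME/privatemodules/mycompiler" << 'EOF'
#%Module -*- tcl -*-
module-whatis "adds an example compiler to the environment (placeholder path)"
prepend-path PATH /opt/mycompiler/bin
EOF
```

After module load use.own, the new module can then be loaded with module load mycompiler.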
Shell environment customization (RHEL/SUSE)
Under Linux, there are a couple of ways to provide a better base environment to all users who develop on a specific node (Example 4-9):
1. Add custom profile scripts under /etc/profile.d/. After /etc/profile runs, all files inside /etc/profile.d/ are sourced. The shell must be restarted, or the file loaded manually with the source command, for the script contents to take effect.
Example 4-9 How to add GPFS path to all users
# echo 'export PATH=$PATH:/usr/lpp/mmfs/bin' >> /etc/profile.d/gpfs.sh
# cat /etc/profile.d/gpfs.sh
export PATH=$PATH:/usr/lpp/mmfs/bin
 