Scenario: How to implement the solution components
Now that you understand the pieces of hardware and software that are used to build an IBM Analytics environment, it is time to learn how to install and configure each piece of it.
This chapter provides instructions to install and configure the IBM BigInsights value-add services, DB2 BLU, Statistical Package for the Social Sciences (SPSS), and Cognos software.
This chapter covers the following topics:
4.1 Basic infrastructure requirements
For this book, we selected the IBM Open Data Platform on the IBM Data Engine for Analytics (IDEA) solution for our implementation. This solution uses IBM Platform Cluster Manager – Advanced Edition (PCM AE) V4.2.1, with its Extreme Cluster Administration Toolkit (xCAT) function to deploy a cluster of IBM POWER8 servers.
The factory installation process results in a ready-to-run environment for analytic workloads that uses the IBM Open Platform edition of BigInsights V4.1, with IBM Spectrum Scale and IBM Platform Symphony, which comes from the Enterprise Manager component. The IDEA solution is based on the IBM POWER8 S812L and S822L servers, including an Elastic Storage Server (ESS) building block.
Therefore, for this book, we do not cover the hardware setup, PCM installation, or operating system deployment. However, expanding on the solution, we provide instructions to install the IBM BigInsights value-add services, DB2 BLU, SPSS, and Cognos. From the hardware perspective, we added additional POWER8 S822A nodes to support the analytics and e-commerce software. The steps that are detailed in this section can also be applied to other POWER8 environments. For more information, see Chapter 2, “Solution reference architecture” on page 5.
Throughout this chapter, we reference the hosts that are shown in Table 4-1.
Table 4-1 Hosts that are referenced throughout this chapter
Host name  Hardware                             Operating system                     Software
smn        Power 812L                           Red Hat Enterprise Linux (RHEL) 7.1  ESS and PCM
mn01       Power 822L logical partition (LPAR)  RHEL 7.1 LE                          IBM Open Platform Manager and Ambari
mn02       Power 822L LPAR                      RHEL 7.1 LE                          IBM Open Platform Manager
dn01       Power 822L                           RHEL 7.1 LE                          IBM Open Platform Worker
dn02       Power 822L                           RHEL 7.1 LE                          IBM Open Platform Worker
dn03       Power 822L                           RHEL 7.1 LE                          IBM Open Platform Worker
dn04       Power 822L                           RHEL 7.1 LE                          E-commerce
dn05       Power 822L                           RHEL 7.1 LE                          DB2
dn06       Power 822A                           RHEL 7.1                             Cognos
dn07       Power 822A                           AIX 7.1                              IBM WebSphere® and IBM SPSS
ess01      Power 822L                           RHEL 7.1                             ESS
ess02      Power 822L                           RHEL 7.1                             ESS
4.2 Using Ambari to deploy BigInsights with Spectrum Scale
This section covers the complete installation of the IBM Open Platform Edition of BigInsights Version 4.1 by using Ambari. If you use a pre-existing installation of the IBM Open Platform software, such as an IDEA solution, skip to 4.2.5, “Installing the BigInsights value-add packages” on page 73 for information about installing the BigInsights value-add packages.
4.2.1 Understanding supported deployment approaches, including Spectrum Scale with Ambari
Multiple supported deployment options exist for Spectrum Scale with Ambari. This section covers an installation of IBM Open Platform Edition on a new cluster (option 1) that uses Spectrum Scale File Placement Optimizer (FPO) technology as an alternative to the factory-installed configuration that is provided with the IDEA solution (option 2), on which the remainder of this section is based. The supported deployment options are as follows:
1. Create a Spectrum Scale cluster.
Use Ambari to create a new Spectrum Scale cluster by using FPO technology:
 – Create and configure the new Spectrum Scale cluster, including the designation of the manager and quorum node roles.
 – Set basic Spectrum Scale configuration parameters for your environment, for example, pagepool, maxFilesToCache, maxStatCache, and worker1Threads.
 – Create Network Shared Disks (NSDs) and the file system by using basic or advanced configuration methods.
 – Install Open Platform components and configure the Spectrum Scale Hadoop Connector.
2. Add new BigInsights nodes to an existing Spectrum Scale cluster.
Use Ambari to add new nodes into an existing File Placement Optimizer (FPO) or Elastic Storage Server (ESS) cluster. The installer will perform these tasks:
 – Install Spectrum Scale on the new nodes.
 – Add the new nodes to the existing Spectrum Scale cluster.
 – Note that Ambari will not create any NSDs or file systems. If this cluster is an existing FPO cluster, new NSDs must be added manually.
 – Install Open Platform components and configure the Spectrum Scale Hadoop Connector.
3. Add BigInsights to existing Spectrum Scale cluster nodes.
Use Ambari to deploy BigInsights on a pre-existing cluster. In this case, Spectrum Scale is installed in advance and configured on all nodes by using FPO or ESS technology. The Spectrum Scale Hadoop Connector might not be configured yet.
The installer will install Open Platform components and configure the Spectrum Scale Hadoop Connector.
4.2.2 Download software
This section provides details about how to download the software. Follow these steps:
1. We suggest that you create a mirror of the IBM hosted repository on a machine within your enterprise network. With this approach, you instruct Ambari to use that local repository rather than the repository that is hosted in the IBM cloud. This is the preferred approach when internet access is restricted, or when you use Spectrum Scale, which requires local repositories. For additional repository approaches, see the following website:
2. For this deployment example, we took advantage of a local HTTP server that was already set up by xCAT on our PCM system management node (smn) to serve as a local yum repository mirror.
Your HTTP server must contain a directory for each of the following repositories, as shown in Figure 4-1.
[root@smn ~]# ls /install/repos/
Ambari GPFS IOP IOP-UTILS
Figure 4-1 Repository directories
3. Download the following tar archives for the IBM Open Platform repository by using wget:
 – Ambari:
 – IOP:
 – IOP-UTILS:
https://ibm.biz/Bd4SHG
4. Extract the three repository archives in the repos directory on the smn node, as shown in Figure 4-2.
cd /install/repos
tar xzvf <path to downloaded tar archives>
Figure 4-2 Extracting the repository tar archives
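If several archives are staged, the extraction step can be scripted. The following sketch is a hypothetical helper (not part of the IDEA solution); the staging and repos directories are arguments, so you can point it at the layout used in this chapter:

```shell
# Sketch: extract every staged .tar.gz repository archive into a repos tree.
# Usage: extract_repos <staging-dir> <repos-dir>
extract_repos() {
    stage_dir=$1
    repos_dir=$2
    mkdir -p "$repos_dir"
    for archive in "$stage_dir"/*.tar.gz; do
        [ -e "$archive" ] || continue      # nothing staged, skip quietly
        echo "Extracting $(basename "$archive")"
        tar xzf "$archive" -C "$repos_dir"
    done
}
```

On the smn node, for example, you might call `extract_repos /tmp/downloads /install/repos`.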
5. Obtain the base installation package files for IBM Spectrum Scale Advanced 4.1.1 Linux POWER8, which can be downloaded from the IBM Passport Advantage® website. Check with your IBM marketing representative or your support team for the URL of this website.
6. Place all of the Spectrum Scale 4.1.1 packages in your GPFS repo directory. Ensure that you remove the gpfs.hadoop-connector rpms and obtain the latest connector package. Download it from this website:
7. You also need to obtain the gpfs-ambari integration package gpfs.ambari-iop_4.1-1.noarch.bin from the http://ibm.co/1RYItoF website and place it in /tmp/ on the management node where you plan to deploy Ambari-server. However, this package will not be a part of the repository for GPFS.
8. Create the local repository metadata, as shown in Figure 4-3.
[root@smn ~]# cd /install/repos/GPFS/rhel/7/ppc64le/4.1.1/
[root@smn 4.1.1]# ls
gpfs.base-4.1.1-0.ppc64le.rpm gpfs.gpl-4.1.1-0.noarch.rpm
gpfs.crypto-4.1.1-0.ppc64le.rpm gpfs.gskit-8.0.50-40.ppc64le.rpm
gpfs.docs-4.1.1-0.noarch.rpm gpfs.hadoop-connector-2.7.0-2.ppc64le.rpm
gpfs.ext-4.1.1-0.ppc64le.rpm gpfs.msg.en_US-4.1.1-0.noarch.rpm
[root@smn 4.1.1]# createrepo .
Figure 4-3 Creating the local GPFS repository metadata
9. Test your local repository by browsing the web directory: http://smn/install/repos.
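The same check can be automated. The following sketch is a hypothetical helper that looks for the createrepo metadata (repodata/repomd.xml) somewhere under each expected repository directory; the directory names follow Figure 4-1:

```shell
# Sketch: confirm that each repository directory contains createrepo
# metadata (repodata/repomd.xml). Returns the number of missing repos.
# Usage: check_repos <base-dir> <repo-name>...
check_repos() {
    base_dir=$1; shift
    missing=0
    for repo in "$@"; do
        if find "$base_dir/$repo" -name repomd.xml 2>/dev/null | grep -q .; then
            echo "OK       $repo"
        else
            echo "MISSING  $repo"
            missing=$((missing + 1))
        fi
    done
    return "$missing"
}
```

On the smn node, for example: `check_repos /install/repos Ambari GPFS IOP IOP-UTILS`.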
4.2.3 Set up and install the Ambari server
Follow these steps:
1. On the management node where you plan to deploy Ambari-server, configure access to the new repository, as shown in Figure 4-4.
[root@mn01-dat ~]# cd /etc/yum.repos.d
[root@mn01-dat yum.repos.d]# vi ambari.repo
[root@mn01-dat yum.repos.d]# cat ambari.repo
[BI_AMBARI-2.1.0]
name=ambari-2.1.0
baseurl=http://smn/install/repos/Ambari/rhel/7/ppc64le/2.1.x/GA/2.1/
enabled=1
gpgcheck=0
Figure 4-4 Configure access to the Ambari repository
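If you provision several management nodes, the repository file can be generated from a script instead of edited by hand. This sketch writes the same content as Figure 4-4; the helper name is ours, and the host name smn and repository path are this chapter's example layout:

```shell
# Sketch: write the ambari.repo yum configuration non-interactively.
# Usage: write_ambari_repo <repo-host> <target-file>
write_ambari_repo() {
    repo_host=$1
    target=$2
    cat > "$target" <<EOF
[BI_AMBARI-2.1.0]
name=ambari-2.1.0
baseurl=http://$repo_host/install/repos/Ambari/rhel/7/ppc64le/2.1.x/GA/2.1/
enabled=1
gpgcheck=0
EOF
}
```

For example: `write_ambari_repo smn /etc/yum.repos.d/ambari.repo`.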
2. Install the Ambari server, as shown in Figure 4-5.
[root@mn01-dat ~]# yum -y install ambari-server
...
Running transaction
Installing : postgresql-libs-9.2.7-1.ael7b.ppc64le 1/4
Installing : postgresql-9.2.7-1.ael7b.ppc64le 2/4
Installing : postgresql-server-9.2.7-1.ael7b.ppc64le 3/4
Installing : ambari-server-2.1.0_IBM-4.ppc64le 4/4
Verifying : ambari-server-2.1.0_IBM-4.ppc64le 1/4
Verifying : postgresql-libs-9.2.7-1.ael7b.ppc64le 2/4
Verifying : postgresql-server-9.2.7-1.ael7b.ppc64le 3/4
Verifying : postgresql-9.2.7-1.ael7b.ppc64le 4/4
 
Installed:
ambari-server.ppc64le 0:2.1.0_IBM-4
 
Dependency Installed:
postgresql.ppc64le 0:9.2.7-1.ael7b postgresql-libs.ppc64le 0:9.2.7-1.ael7b postgresql-server.ppc64le 0:9.2.7-1.ael7b
 
Complete!
[root@mn01-dat ~]# rpm -qa ambari-server
ambari-server-2.1.0_IBM-4.ppc64le
Figure 4-5 Installing Ambari
3. Execute the gpfs-ambari integration package, as shown in Figure 4-6.
[root@mn01-dat ~]# /tmp/gpfs.ambari-iop_4.1-1.noarch.bin
International License Agreement for Non-Warranted Programs
Part 1 - General Terms
BY DOWNLOADING, INSTALLING, COPYING, ACCESSING, CLICKING ON AN "ACCEPT" BUTTON, OR OTHERWISE USING THE PROGRAM, LICENSEE AGREES TO THE TERMS OF THIS AGREEMENT. IF YOU ARE ACCEPTING THESE TERMS ON BEHALF OF LICENSEE, YOU REPRESENT AND WARRANT THAT YOU HAVE FULL AUTHORITY TO BIND LICENSEE TO THESE TERMS. IF YOU DO NOT AGREE TO THESE TERMS,
* DO NOT DOWNLOAD, INSTALL, COPY, ACCESS, CLICK AN "ACCEPT" BUTTON, OR USE THE PROGRAM; AND
* PROMPTLY RETURN THE UNUSED MEDIA AND DOCUMENTATION TO THE PARTY FROM WHOM IT WAS OBTAINED FOR A REFUND OF THE AMOUNT PAID. IF THE PROGRAM WAS DOWNLOADED, DESTROY ALL COPIES OF THE PROGRAM.
...
Do you agree to the above license terms? [yes or no]
yes
Unpacking...
Done
Installing...
Preparing... ################################# [100%]
Updating / installing...
1:gpfs.ambari-iop_4.1-0 ################################# [100%]
Figure 4-6 Installing GPFS and Ambari integration
 
Important: Do not execute the gpfs-ambari integration package from /root/ because it can introduce problems.
4. Update the values of the openjdk1.8.url and openjdk1.7.url properties in /etc/ambari-server/conf/ambari.properties to point to your local repository, as shown in Figure 4-7.
[root@mn01-dat ~]# cat /etc/ambari-server/conf/ambari.properties | grep openjdk1.[78].url
openjdk1.8.url=http://smn/install/repos/IOP-UTILS/rhel/7/ppc64le/1.1/openjdk/jdk-1.8.0.tar.gz
openjdk1.7.url=http://smn/install/repos/IOP-UTILS/rhel/7/ppc64le/1.1/openjdk/jdk-1.7.0.tar.gz
Figure 4-7 Updating the Ambari openjdk repository configuration
5. Update your Ambari configuration to point to the new repositories, as shown in Figure 4-8 and in Figure 4-9.
[root@mn01-dat ~]# cd /var/lib/ambari-server/resources/stacks/BigInsights/4.1.SpectrumScale/repos/
[root@mn01-dat repos]# vi repoinfo.xml
Figure 4-8 Updating Ambari repository configuration (part 1 of 2)
<reposinfo>
<mainrepoid>IOP-4.1-Spectrum_Scale</mainrepoid>
<os family="redhat7">
<repo>
<baseurl>http://smn/install/repos/IOP/rhel/7/ppc64le/4.1.x/GA/4.1.0.0/</baseurl>
<repoid>IOP-4.1-mirror</repoid>
<reponame>IOP</reponame>
</repo>
<repo>
<baseurl>http://smn/install/repos/IOP-UTILS/rhel/7/ppc64le/1.1/</baseurl>
<repoid>IOP-UTILS-1.1-mirror</repoid>
<reponame>IOP-UTILS</reponame>
</repo>
<repo>
<baseurl>http://smn/install/repos/GPFS/rhel/7/ppc64le/4.1.1/</baseurl>
<repoid>GPFS-4.1.1</repoid>
<reponame>GPFS</reponame>
</repo>
</os>
</reposinfo>
Figure 4-9 Updating Ambari repository configuration (part 2 of 2)
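Because an invalid repoinfo.xml can cause Ambari to ignore the repository configuration, a quick well-formedness check before you restart ambari-server catches typos early. This sketch is a hypothetical helper; it assumes python3 is available on the management node:

```shell
# Sketch: verify that a repoinfo.xml file is well-formed XML.
# Usage: check_repoinfo <path-to-repoinfo.xml>
check_repoinfo() {
    python3 -c '
import sys, xml.etree.ElementTree as ET
try:
    ET.parse(sys.argv[1])
    print("repoinfo.xml is well-formed")
except ET.ParseError as err:
    print("parse error:", err)
    sys.exit(1)
' "$1"
}
```

For example: `check_repoinfo /var/lib/ambari-server/resources/stacks/BigInsights/4.1.SpectrumScale/repos/repoinfo.xml`. Note that this checks only XML syntax, not whether the repository URLs are reachable.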
6. Update params.py in the Ambari server to fix the Spark History Service Permission issue. By default, spark_eventlog_dir_mode is 01777, which will cause a permission issue when you start the Spark History Service. This issue might be fixed in the future. However, in the meantime, you must change spark_eventlog_dir_mode to 0777 (Figure 4-10).
[root@mn01-dat ~]# vi /var/lib/ambari-server/resources/stacks/BigInsights/4.1/services/SPARK/package/scripts/params.py
...
70 spark_hdfs_user_dir = format("/user/{spark_user}")
71 spark_hdfs_user_mode = 0755
72 spark_eventlog_dir_mode = 0777
73 spark_jar_hdfs_dir = "/iop/apps/4.1.0.0/spark/jars"
74 spark_jar_hdfs_dir_mode = 0755
75 spark_jar_file_mode = 0444
76 spark_jar_src_dir = "/usr/iop/current/spark-client/lib"
77 spark_jar_src_file = "spark-assembly.jar"
Figure 4-10 Spark history service permission workaround
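The edit shown in Figure 4-10 can also be applied with sed rather than an interactive editor. This sketch is a hypothetical helper; it keeps a .bak copy of params.py so that you can revert the change:

```shell
# Sketch: change spark_eventlog_dir_mode from the default 01777 to 0777
# in the Spark params.py, keeping a .bak backup of the original file.
# Usage: fix_spark_eventlog_mode <path-to-params.py>
fix_spark_eventlog_mode() {
    params_file=$1
    sed -i.bak 's/\(spark_eventlog_dir_mode *= *\)01777/\10777/' "$params_file"
}
```

For example: `fix_spark_eventlog_mode /var/lib/ambari-server/resources/stacks/BigInsights/4.1/services/SPARK/package/scripts/params.py`.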
7. Run the Ambari server setup, as shown in Figure 4-11.
[root@mn01-dat ~]# ambari-server setup
Using python /usr/bin/python2.7
Setup ambari-server
Checking SELinux...
SELinux status is 'disabled'
Customize user account for ambari-server daemon [y/n] (n)? n
Adjusting ambari-server permissions and ownership...
Checking firewall status...
Redirecting to /bin/systemctl status iptables.service
 
Checking JDK...
[1] OpenJDK 1.8.0
[2] OpenJDK 1.7.0 (deprecated)
[3] Custom JDK
==============================================================================
Enter choice (1): 1
Downloading JDK from http://smn/install/repos/IOP-UTILS/rhel/7/ppc64le/1.1/openjdk/jdk-1.8.0.tar.gz to /var/lib/ambari-server/resources/jdk-1.8.0.tar.gz
jdk-1.8.0.tar.gz... 100% (48.3 MB of 48.3 MB)
Successfully downloaded JDK distribution to /var/lib/ambari-server/resources/jdk-1.8.0.tar.gz
Installing JDK to /usr/jdk64/
Successfully installed JDK to /usr/jdk64/
Completing setup...
Configuring database...
Enter advanced database configuration [y/n] (n)? n
Configuring database...
Default properties detected. Using built-in database.
Configuring ambari database...
Checking PostgreSQL...
Running initdb: This may take upto a minute.
Initializing database ... OK
 
 
About to start PostgreSQL
Configuring local database...
Connecting to local database...done.
Configuring PostgreSQL...
Restarting PostgreSQL
Extracting system views...
ambari-admin-2.1.0_IBM_4.jar
......
Adjusting ambari-server permissions and ownership...
Ambari Server 'setup' completed successfully.
Figure 4-11 Ambari server setup
8. Start the Ambari server, as shown in Figure 4-12.
[root@mn01-dat ~]# ambari-server start
Using python /usr/bin/python2.7
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start....................
Ambari Server 'start' completed successfully.
Figure 4-12 Ambari server startup
4.2.4 Deploying the IBM Open Platform edition of BigInsights
This section describes how to deploy the IBM Open Platform edition of BigInsights. Follow these steps:
1. Before the installation, ensure that you configure password-less root access from the ambari-server node to all other nodes. This capability is required for Spectrum Scale administration. If you used PCM or xCAT to deploy your nodes, this configuration is performed automatically.
2. You can now connect to the default port 8080 on the server that is running ambari-server through your web browser: http://mn01-dat:8080/. The default account username/password is admin/admin.
 
Tip: The client port can be overridden by setting client.api.port in /etc/ambari-server/conf/ambari.properties.
Changes require that you restart the server with the command ambari-server restart.
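The property edit in the Tip above can be scripted. This sketch is a hypothetical helper (the function name is ours) that sets or replaces a key in a Java-style properties file idempotently; client.api.port and 8081 are example values:

```shell
# Sketch: set (or replace) a key=value pair in a properties file.
# If the key exists, its value is replaced; otherwise the line is appended.
# Usage: set_ambari_property <file> <key> <value>
set_ambari_property() {
    file=$1; key=$2; value=$3
    if grep -q "^${key}=" "$file"; then
        sed -i "s|^${key}=.*|${key}=${value}|" "$file"
    else
        echo "${key}=${value}" >> "$file"
    fi
}
```

For example: `set_ambari_property /etc/ambari-server/conf/ambari.properties client.api.port 8081`, followed by `ambari-server restart`.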
3. To launch the Ambari Install Wizard, click Launch Install Wizard, as shown in Figure 4-13.
Figure 4-13 Ambari cluster installation wizard: Pre-cluster view
4. Name your cluster, as shown in Figure 4-14.
Figure 4-14 Ambari cluster installation wizard: Naming your cluster
5. Select the BigInsights 4.1.SpectrumScale stack by clicking BigInsights 4.1.SpectrumScale to pull in the Spectrum Scale services and configuration and the modified configurations for other services to use GPFS instead of HDFS. Click Advanced Repository Options to verify and change your repositories, if necessary (Figure 4-15).
Figure 4-15 Ambari cluster installation wizard: Stack selection
6. Enter information about the nodes in the cluster, as shown in Figure 4-16.
 
Note: For simplicity, the following windows show only two management nodes (julia and emma), and they do not show any data nodes.
Figure 4-16 Ambari cluster installation wizard: Install options
7. Click Register and Confirm. Ambari installs its agent service on each node and runs several basic checks, as shown in Figure 4-17 and in Figure 4-18 on page 64.
Figure 4-17 Ambari cluster installation wizard: Host confirmation
8. Figure 4-18 shows the Host Checks window.
Figure 4-18 Ambari cluster installation wizard: Host Checks
9. Choose Services, as shown in Figure 4-19 on page 66.
10. Assign node roles. For more information about recommendations for two, four, six, or eight management server systems, see Appendix B, “Planning Ambari node roles” on page 267.
 
Important: ResourceManager, Symphony Master, Spark History Server, and Spark Thrift Server must be on the same node.
11. The Spectrum Scale Master Node is the node where commands that affect the entire cluster run. For example, when Spectrum Scale is first installed and an FPO cluster is first created, the commands are all executed on the Spectrum Scale Master Node. On the other Spectrum Scale nodes, after the rpms are installed, the Master Node handles adding the nodes to the cluster, deploying the Hadoop connector, and so on.
As another example, if the configuration changes after the cluster is deployed, the Spectrum Scale Master Node executes the commands to reconfigure the cluster and if necessary, to restart Spectrum Scale on all nodes. The term “Master” is used here only to follow the convention that is used by the other Hadoop services. The Spectrum Scale Master Node has no special role in the Spectrum Scale cluster itself (other than as one of the quorum nodes).
Assign Slaves and Clients. Spectrum Scale has a node role, GPFS Node, which must be deployed to every node (including the Spectrum Scale Master Node).
 
Note: Reference DeployBigInsights4.1_SpectrumScale_with_Ambari 2.1_v0.8.1.pdf at the following website:
12. Figure 4-19 shows how to assign node master roles.
Figure 4-19 Ambari cluster installation wizard: Assign node master roles
13. Figure 4-20 shows how to assign slave and client roles.
Figure 4-20 Ambari cluster installation wizard: Assign node slave and client roles
14. Customize services. Click each service tab that is flagged with a red circle and enter the required database passwords. Optional: Review the configuration for each service, as shown in Figure 4-21.
Figure 4-21 Ambari cluster installation wizard: Customize Services
15. Configure the Spectrum Scale stack. Spectrum Scale has its own tab on the Customize Services page, which contains Standard and Advanced configuration subtabs. On the Standard tab, you can adjust parameters through slider bars or drop-down menus. The Advanced tab contains the parameters that do not need to be changed frequently. When you create a new cluster, all fields on both tabs are populated with initial values that are taken from the recommendations in the following white paper:
16. The parameters that are followed by a lock icon must not be changed after deployment (for example, Cluster Name, Remote Shell, Filesystem Name, and Max Data Replicas). Before you start the deployment, double-check all important parameters.
17. Two types of NSD files are supported for file system creation. The preferred approach is to use the simple format. However, the standard Spectrum Scale NSD file format is also accepted. If you choose the simple NSD file format, Ambari chooses the correct metadata and data ratio for you. If possible, Ambari also creates partitions on several disks for Hadoop intermediate data, which can improve Hadoop performance. If you choose the standard Spectrum Scale NSD file format, you are responsible for the arrangement of all storage space.
18. Due to a limitation in the Ambari framework, the NSD file must be placed on the Ambari server under /var/lib/ambari-server/resources/gpfs_nsd. See the gpfs_nsd.sample for an example of the simple format.
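For reference, a standard Spectrum Scale NSD stanza (the non-simple alternative mentioned above) looks similar to the following sketch. The device, NSD, and server names are placeholders for illustration only; consult the Spectrum Scale documentation and the provided gpfs_nsd.sample for the exact format that your release accepts.

```
%nsd: device=/dev/sdb
  nsd=nsd_dn01_1
  servers=dn01
  usage=dataAndMetadata
  failureGroup=1
  pool=system
```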
19. Review and begin the deployment, as shown in Figure 4-22.
Figure 4-22 Ambari cluster installation wizard: Review deployment
20. Figure 4-23 shows the Install, Start and Test window.
Figure 4-23 Ambari cluster installation wizard: Install and test
21. After the deployment completes, you might need to restart several services, as shown in Figure 4-24.
Figure 4-24 Ambari cluster installation wizard: Install warnings
22. Due to limitations in the Ambari stack, you must install the update to Spectrum Scale 4.1.1.3 manually.
 
Note: For more information about supported versions, see the following website:
23. Extract the Spectrum Scale update rpm packages to /install/repos/GPFS/rhel/7/ppc64le/4.1.1.3/ on the smn node.
24. Ensure that you delete any gpfs.hadoop-connector rpms because the latest rpm is installed with the base version repository. Then, create the yum repository metadata, as shown in Figure 4-25.
[root@smn 4.1.1.3]# ls
gpfs.base-4.1.1-3.ppc64.update.rpm gpfs.ext-4.1.1-3.ppc64.update.rpm gpfs.msg.en_US-4.1.1-3.noarch.rpm
gpfs.crypto-4.1.1-3.ppc64.update.rpm gpfs.gpl-4.1.1-3.noarch.rpm
gpfs.docs-4.1.1-3.noarch.rpm gpfs.gskit-8.0.50-47.ppc64.rpm
[root@smn 4.1.1.3]# createrepo .
Figure 4-25 Spectrum Scale update rpms
25. Create the yum configuration file, as shown in Figure 4-26.
[root@smn 4.1.1.3]# vi /etc/yum.repos.d/GPFS_PTF.repo
[root@smn 4.1.1.3]# cat /etc/yum.repos.d/GPFS_PTF.repo
[GPFS-4.1.1.3]
name=GPFS-4.1.1.3
baseurl=http://smn/install/repos/GPFS/rhel/7/ppc64le/4.1.1.3
path=/
enabled=1
gpgcheck=0
Figure 4-26 Spectrum Scale update yum configuration file
26. Copy the yum configuration file to all nodes, as shown in Figure 4-27.
[root@smn 4.1.1.3]# xdcp __Managed /etc/yum.repos.d/GPFS_PTF.repo /etc/yum.repos.d/GPFS_PTF.repo
Figure 4-27 Spectrum Scale yum repository distribution
27. Stop all services that use the Ambari graphical user interface (GUI), as shown in Figure 4-28.
Figure 4-28 Ambari Actions: Stop All services
28. Ensure that Spectrum Scale is shut down on all nodes.
29. Proceed to install the updated packages, as shown in Figure 4-29.
[root@smn ~]# ssh julia /usr/lpp/mmfs/bin/mmgetstate -a
 
Node number Node name GPFS state
------------------------------------------
1 julia down
2 emma down
[root@smn ~]# xdsh __Managed "yum -y erase gpfs.gpl gpfs.docs gpfs.gskit gpfs.msg.en_US gpfs.crypto gpfs.ext"
[root@smn ~]# xdsh __Managed "yum -y install gpfs.base gpfs.ext gpfs.crypto gpfs.gpl gpfs.docs gpfs.gskit gpfs.msg.en_US"
[root@smn ~]# xdsh __Managed "rpm -qa gpfs.base"
emma: gpfs.base-4.1.1-3.ppc64le
julia: gpfs.base-4.1.1-3.ppc64le
[root@smn ~]# xdsh __Managed "/usr/lpp/mmfs/bin/mmbuildgpl"
...
[root@smn ~]# xdsh __Managed "/usr/lpp/mmfs/bin/mmstartup"
...
[root@smn ~]# ssh julia /usr/lpp/mmfs/bin/mmgetstate -a
 
Node number Node name GPFS state
------------------------------------------
1 julia active
2 emma active
Figure 4-29 Spectrum Scale update installation
 
Tip: If Spectrum Scale is still active, you need to run the mmshutdown command. Consider a rolling upgrade if you cannot shut down the entire cluster at one time. For more information, see the GPFS FPO Cluster Maintenance Guide on the Big Data Best practices page of the IBM Spectrum Scale Wiki:
30. Start all services in the Ambari GUI.
4.2.5 Installing the BigInsights value-add packages
This section provides the steps to install the BigInsights value-add packages. Follow these steps:
1. Plan ahead. Before you add the servers, check the suggested layout:
2. Download IBM BigInsights Analyst 4.1.0.1 for Linux on Power (bana-1.1.0.0.el7.ppc64le.bin).
3. Download IBM BigInsights for Apache Hadoop 4.1.0.1 for Linux on Power (bah-1.1.0.0.el7.ppc64le.bin).
4. Ensure that the eAssembly files are executable (chmod a+x *.bin). Execute the programs from a machine with access to the internet. Choose the OFFLINE installation type to download BI-ANA-RHEL7.tar.gz and BI-ANA-RHELIOP.tar.gz.
5. Extract the contents of both archives into /install/repos/BigInsights-Valuepacks/RHEL7/ppc64le/4.1.0.1/ on the smn repository node, and run createrepo, as shown in Figure 4-30.
[root@smn ~]# cd /install/repos/BigInsights-Valuepacks/RHEL7/ppc64le/4.1.0.1
[root@smn 4.1.0.1]# createrepo .
[root@smn 4.1.0.1]# ls
BI-Analyst-IOP-1.1.0.1-4.1.el7.ppc64le.rpm bigsheets-distrib-5.11.2.rpm repodata
BI-Apache-Hadoop-IOP-1.1.0.1-4.1.el7.ppc64le.rpm bigsql-dist_4_1_0_0-5.28.1-Linux-ppc64le.rpm text-analytics-runtime-4.6.rpm
BigR-4.3.0.5.rpm bigsql-samples_4_1_0_0-5.28.1.rpm text-analytics-web-tooling-3.4.rpm
BigR-BigSQL1-3.4.rpm db2luw-linuxppc64le-10.6.0.3-s150918-db2rpm.rpm web-ui-framework-2.7.rpm
BigR-Jaql-3.2.0.1.rpm dsm-1.1.1.1-N20150908_1239.noarch.rpm
BigR-SystemML-5.4.0.2.rpm jsqsh-4.4-Linux-amd64.rpm
Figure 4-30 BigInsights-Valuepacks repository metadata
6. On the Ambari manager node, set up yum to point to the new repository, as shown in Figure 4-31.
[root@mn01-dat ~]# cat /etc/yum.repos.d/BIGINSIGHTS-VALUEPACK.4.1.repo
[BIGINSIGHTS-VALUEPACK.4.1]
name=BIGINSIGHTS-VALUEPACK.4.1
baseurl=http://smn/install/repos/BigInsights-Valuepacks/RHEL7/ppc64le/4.1.0.1/
 
path=/
enabled=1
Figure 4-31 BigInsights-Valuepacks yum configuration file
7. Install the enablement rpms on the Ambari server, as shown in Figure 4-32.
yum -y install BI-Apache-Hadoop-IOP-2.13.1-IOP-4_1
yum -y install BI-Analyst-IOP-2.13.1-IOP-4_1
Figure 4-32 BigInsights-Valuepacks enablement rpm installation
8. Update your Ambari configuration to point to the new repositories, as shown in Figure 4-33 and in Figure 4-34.
cd /var/lib/ambari-server/resources/stacks/BigInsights/4.1.SpectrumScale/repos/
vi repoinfo.xml
Figure 4-33 Ambari server repository configuration update (part 1 of 2)
<reposinfo>
<mainrepoid>IOP-4.1-Spectrum_Scale</mainrepoid>
<os family="redhat7">
<repo>
<baseurl>http://smn/install/repos/IOP/rhel/7/ppc64le/4.1.x/GA/4.1.0.0/</baseurl>
<repoid>IOP-4.1-mirror</repoid>
<reponame>IOP</reponame>
</repo>
<repo>
<baseurl>http://smn/install/repos/IOP-UTILS/rhel/7/ppc64le/1.1/</baseurl>
<repoid>IOP-UTILS-1.1-mirror</repoid>
<reponame>IOP-UTILS</reponame>
</repo>
<repo>
<baseurl>http://smn/install/repos/GPFS/rhel/7/ppc64le/4.1.1/</baseurl>
<repoid>GPFS-4.1.1</repoid>
<reponame>GPFS</reponame>
</repo>
<repo>
<baseurl>http://smn/install/repos/ValueAdds/</baseurl>
<repoid>BigInsights-ValueAdds-IOP-4.1-mirror</repoid>
<reponame>BI-ValueAdds-IOP-4.1-mirror</reponame>
</repo>
</os>
</reposinfo>
Figure 4-34 Ambari server repository configuration update (part 2 of 2)
 
Tip: Ambari uses a relational database management system (RDBMS) to store its data. In this project, the default PostgreSQL database was used. The entire repoinfo.xml file is ignored by Ambari after installation if a single repository entry is invalid, so ensure that you remove or update any invalid entries. The file is also not updated by the ambari-server if it was changed dynamically at installation time. If you experience any problems when you load the repoinfo.xml file, check /var/log/ambari-server/ambari-server.log for details. For example, an error that is similar to the following example might appear:
“AmbariManagementControllerImpl:3583 - Could not access base url . http://192.168.9.3/repos/IOP-UTILS/RHEL6/x86_64/1.1 . Network is unreachable”
9. After you update the repoinfo.xml file, restart Ambari, as shown in Figure 4-35.
ambari-server restart
Figure 4-35 Ambari server restart
10. From the Ambari website, click Admin → Stack and Versions. Click the Versions tab, and click the Edit Repositories icon, as shown in Figure 4-36.
Figure 4-36 Ambari Stack and Versions
11. Ensure that the repositories that you updated in repoinfo.xml are displayed, as shown in Figure 4-37.
Figure 4-37 Ambari stack repositories
12. After you install the rpms, you can run the pre-installation check for Big SQL outside of Ambari before you add the services (Figure 4-38).
/var/lib/ambari-server/resources/stacks/BigInsights/4.1/services/BIGSQL/package/scripts/bigsql-precheck.sh -M PRE_ADD_HOST -V -u bigsql
Figure 4-38 BigInsights pre-installation check
13. For more information, see the following website:
 
Note: The script is also executed by Ambari when you add the services.
14. Several pre-check issues are common:
 – Error on sudoers. You need to comment out the line Defaults requiretty (on all nodes).
 – FAIL Hosts file check: You might see this error from a pre-installation check that is performed by Big SQL. If you followed the IDEA runbook to set up your cluster, your /etc/hosts file is probably defined in a way that causes hostname -s and hostname -f to return the same short hostname.
15. To fix this error, use the -l (lowercase L) option with the makehosts command on your PCM node to rebuild the /etc/hosts file, as shown in Figure 4-39.
makehosts __Managed,lpar_ess,switch,smn -l
updatenode __Managed,lpar_ess -F
Figure 4-39 xCAT hosts configuration
16. The relevant entries of the corrected /etc/hosts file entries are shown in Figure 4-40.
172.16.1.10 julia.cluster.com julia
172.16.1.11 emma.cluster.com emma
[root@julia ~]# hostname -s
julia
[root@julia ~]# hostname -f
julia.cluster.com
Figure 4-40 /etc/hosts name order
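A quick way to audit the hosts file ordering is to check that the fully qualified name appears before the short name on each entry. This sketch is a hypothetical helper; it simply flags entries whose second field lacks a domain suffix, which is the pattern that makes hostname -f return a short name:

```shell
# Sketch: flag /etc/hosts entries where the short name precedes the FQDN.
# An entry should read: <address> <fqdn> <short-name>
# Usage: check_hosts_order <hosts-file>; exits nonzero if a bad entry is found.
check_hosts_order() {
    hosts_file=$1
    awk '$1 ~ /^[0-9]/ && NF >= 3 && $2 !~ /\./ {
            print "bad order on line " NR ": " $0; bad = 1
         }
         END { exit bad }' "$hosts_file"
}
```

For example, `check_hosts_order /etc/hosts` on each node after you run makehosts and updatenode.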
17. Confirm that Hive metastore connectivity exists from the node where Big SQL will be installed, even if Big SQL will be on the same node with Hive. You can test this connectivity by opening the Hive shell from the command line and running a simple command. Perform the steps that are shown in Figure 4-41.
Authenticate as the hive user:
 
su hive
 
Open the Hive shell by typing the following command from the command line:
 
hive
 
Run a command, such as the following command, which displays tables:
 
hive> show tables;
Figure 4-41 Hive metastore connectivity test
18. Perform any remaining steps in the planning guide at the following website:
19. In the Ambari web interface, click Actions → Add Service (Figure 4-42).
Figure 4-42 Ambari Add Service Actions menu
20. Click Add Service Wizard → Choose Services → BigInsights - Data Server Manager → BigInsights - Big SQL → BigInsights - Home. Figure 4-43 shows the Ambari Add Service selection menu.
Figure 4-43 Ambari Add Service: Selection
21. Assign the master components to the hosts on which you want to run them, as shown in Figure 4-44.
Figure 4-44 Ambari Add Service: Assign Masters
 
Important: When you choose your Big SQL Head and Secondary Head, remember that you cannot run the Big SQL Worker and Head services on the same node. In this case, we selected management nodes mn01 and mn02.
22. Figure 4-45 shows the Ambari window to assign slaves and clients.
Figure 4-45 Ambari Add Service: Assign Slaves and Clients
 
Tip: Place the Big SQL workers on the data nodes only. In this example, our data nodes are dn01, dn02, and dn03.
23. Customize the services. In the Customize Services step for BigInsights Big SQL, we defined the password for the bigsql user as cluster.
24. For Data Server Manager, for the dsm_admin_user field, we entered admin. Type a Knox user name to become the administrator for Data Server Manager. For more information about the configuration steps, see the Installing the BigInsights - Data Server Manager topic in the IBM Knowledge Center:
25. Click each service that is flagged in red and complete the required fields, as shown in Figure 4-46.
Figure 4-46 Ambari Add Service: Customize Big SQL
26. Figure 4-47 shows the Ambari Add Service window to customize the BigInsights Data Server Manager.
Figure 4-47 Ambari Add Service: Customize BigInsights Data Server Manager
27. Review and deploy, as shown in Figure 4-48.
Figure 4-48 Review and deploy
28. Figure 4-49 shows the Ambari Install, Start and Test window.
Figure 4-49 Ambari Add Service: Install and Test initial view
29. Figure 4-50 shows the Ambari Add Service window to check the installation and service.
Figure 4-50 Ambari Add Service: Install and test service check
30. During the installation process, if any errors occur, you can review the logs from Ambari, correct the problem, and retry the installation. For example, if you forgot to comment out the requiretty line in the /etc/sudoers file on all of the nodes, you will see a tty-related sudo permission error.
Also, you might see warnings in the logs that include a warning about the need to restart several affected services.
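The requiretty fix mentioned above can be sketched as follows. This is a hedged illustration that operates on a scratch copy; on the cluster, you would edit /etc/sudoers with visudo on every node.

```shell
# Sketch (shown against a scratch file, not the live /etc/sudoers):
# comment out the requiretty directive so that sudo works without a tty.
printf 'Defaults    requiretty\nroot ALL=(ALL) ALL\n' > /tmp/sudoers.demo
sed -i 's/^[[:space:]]*Defaults[[:space:]]\{1,\}requiretty/#&/' /tmp/sudoers.demo
grep '^#' /tmp/sudoers.demo
```

The `#&` in the sed replacement keeps the original line intact behind the comment marker, which makes the change easy to revert.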
31. You must run knox_setup.sh to enable Knox for the value-add services. Follow the directions as explained in the “Enabling Knox for value-add service” section of the IBM BigInsights manual at the following website:
If you fail to run the script, you will not find the updated JAR files, which are required by dsm, in the /usr/iop/4.1.0.0/knox/lib/ directory.
As part of the process, the script will restart Ambari and Knox.
32. You must start the Lightweight Directory Access Protocol (LDAP) service for authentication. You can use the Knox Demo LDAP for authentication, but you must start it before you access the BigInsights home URL. The default credentials are guest/guest-password.
33. Before you continue with the BigInsights Big SQL configuration, you likely need to apply a patch to fix a bug in the Ambari web GUI. This bug prevents you from saving core-site changes. Run the following script (Figure 4-51):
/var/lib/ambari-server/resources/scripts/gpfs_core_site_patch.sh
Figure 4-51 Ambari web user interface (UI) patch
34. Reload the Ambari web interface on the browser.
35. From the Ambari dashboard, restart the HDFS, YARN, MapReduce2, and Big SQL services.
36. Restart the Knox Service. Start the Knox Demo LDAP service if you did not configure your own LDAP.
37. Restart the BigInsights Home services.
38. Follow the remaining steps from the document at the following website:
39. Access the Knox Gateway service by using the following URL:
4.3 DB2 with BLU Acceleration to store structured data
Structured data implies that data elements are stored according to a predefined data model. Sets of entities, tables, and files that are organized into attributes, fields, columns, and lines with predefined data types are examples of data models. A data type is the predefined type, length, and format of stored data. For example, a timestamp format might be represented as YYYY-MM-DD HH:MM:SS, and an instance of that data type is 2015-11-05 11:00:00. Every instance of one entity, table, or file is considered a new record, row, or line.
A spreadsheet, which is an example of structured data, is a set of tables where every cell is an intersection of a column and a row. The variant data type, which is the most common data type of spreadsheet cells, is also considered a data type under this definition.
A relational database, which is another example of structured data, consists of a set of tables that are organized into rows according to the columns’ predefined data types. In addition, relational databases enforce relationship constraints between tables to establish the consistency of data across the database.
Unstructured data has no predefined data model. However, it can be scanned and analyzed to provide the required data. Text data, for example, a digital copy of a contract, is considered unstructured data because the established date of the contract is not necessarily described in a predefined field or format and it can be scanned and found throughout that file.
You might ask yourself about something in between structured and unstructured data. Semi-structured data is unstructured data that is combined with metadata that provides tags and instructions for the position, format, length, or type of a specific data element to be addressed within the unstructured data source.
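The three categories above can be illustrated with a small, hypothetical example: the same fact about a contract rendered as structured data (fixed CSV columns), semi-structured data (JSON whose tags describe each field), and unstructured data (free text that must be scanned to find the date).

```shell
# Illustration with hypothetical data: the same fact in three forms.
printf 'id,established\n42,2015-11-05\n' > /tmp/contract.csv          # structured
printf '{"id": 42, "established": "2015-11-05"}\n' > /tmp/contract.json  # semi-structured
printf 'Contract 42 was established on November 5, 2015.\n' > /tmp/contract.txt  # unstructured
# Only the structured form exposes the date at a predictable position:
awk -F, 'NR==2 {print $2}' /tmp/contract.csv
```

Note that the unstructured form carries the same information, but a program must scan and interpret the text to extract it, which is exactly the gap that analytics tooling addresses.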
In this context, IBM DB2 with BLU Acceleration is well suited to store structured data and speeds up the analytic workloads of your organization. It delivers unparalleled performance improvements for analytic applications and reporting by using dynamic in-memory optimized columnar technologies. Although the industry is abuzz with discussions about in-memory columnar data processing, BLU Acceleration offers much more: it delivers significant improvements in database compression, and it does not require you to keep all of your data in memory.
DB2 with BLU Acceleration includes several features that work together to make it a significant advancement in technology. We refer to these features as the BLU Acceleration Seven Big Ideas:
Simplicity and ease of use
Column store
Adaptive compression
Parallel vector processing
Core-friendly parallelism
Scan-friendly memory caching
Data skipping
BLU Acceleration is simple and easy to use. The required effort to deploy and maintain a BLU Acceleration environment is minimal. Advanced technologies, such as columnar compression, parallel vector processing, core-friendly parallelism, scan-friendly memory caching, and data skipping are all used by DB2 automatically without database administrators (DBAs) explicitly deploying auxiliary structures for it to work. It is in the DB2 engine’s nature to process queries by using these technologies.
At the center of BLU Acceleration is the column-organized table store. It is combined with actionable compression that operates on a column and page level to save storage space. The column organization eliminates the need for creating and maintaining secondary indexes and aggregates. In DB2 10.5, both column-organized and traditional row-organized tables can coexist in the same database. For optimal performance, run analytical queries against tables that are all column-organized.
For users who intend to convert existing tables to facilitate their analytic processing needs in a mixed workload environment, we suggest that you choose the use of BLU Acceleration only on those tables that are used purely for analytics. The db2convert utility converts row-organized tables to column-organized tables, while source tables remain accessible online.
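The conversion mentioned above can be sketched as follows. This is a hedged illustration only; the database, schema, and table names (GS_DBBLU, GOSALES, PRODUCT) are hypothetical placeholders for your own objects.

```shell
# Hypothetical sketch, run as the DB2 instance owner: convert one
# row-organized table to column organization while the source table
# remains accessible online. All three names are placeholders.
db2convert -d GS_DBBLU -z GOSALES -t PRODUCT
```

Running db2convert without the -z and -t options converts all eligible row-organized tables in the database, so scoping the command to specific analytic tables, as suggested above, is the safer choice in a mixed workload environment.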
Across our client base that uses BLU Acceleration, clients conservatively experienced approximately 10 times compression on average on their analytics databases, without any complex tuning. In terms of performance, their queries ran 35 - 73 times faster on average (and several queries ran even faster).
For example, workloads with the following characteristics benefit most from BLU Acceleration:
Analytical, data mart workloads
Queries that involve grouping, aggregation, range scans, and joins
Queries that access only a subset of the columns in a table
Star or dimensional schemas
SAP Business Warehouse application workloads
4.3.1 DB2 system requirements
The BLU Acceleration feature in DB2 10.5 is supported on AIX and Linux on Power (Table 4-2). It uses the same DB2 10.5 minimum operating system requirements.
For recent information about DB2 system requirements, consult the general documentation at the following website:
Table 4-2 Suggestions for IBM DB2 with BLU Acceleration for IBM Power Systems

Operating system      Minimum required version                      Suggested hardware
Linux little endian   Red Hat Enterprise Linux (RHEL) Server 7.1;   IBM POWER8 or later
                      SUSE Linux Enterprise Server 12
AIX                   AIX 6.1 Technology Level (TL) 7;              IBM POWER8 or later
                      AIX 7.1 TL1
For more information about sizing system resources, see Best practices: Optimizing analytic workloads using DB2 10.5 with BLU Acceleration at the following website:
 
Note: For demonstration, we refer to the GOSALES Cognos sample database, which is in the IBM Cognos samples installation. This database can be stored in less than 10 GB, but we suggest at least 10 GB of disk space for the DB2 BLU database to fully reproduce our demonstration environment.
4.3.2 DB2 license requirements and functionality
DB2 for Linux, UNIX, and Windows Version 10.5 is available in multiple product editions. Each edition includes a different feature number and provides the functionality that we describe in this section.
In terms of required license entitlements, the BLU Acceleration feature includes the following DB2 10.5 editions for production environments:
Advanced Enterprise Server Edition (AESE)
Advanced Workgroup Server Edition (AWSE)
Non-production environments can use the following DB2 10.5 edition that also entitles the use of BLU Acceleration:
Developer Edition (DE)
The license files for DB2 Version 10.5 ship separately for convenience so that you can download the license file in less time due to its small size. You need to download the license activation key from Passport Advantage and then install it.
Contact the Passport Advantage eCustomer Care team for assistance if you encounter problems. See the following website:
Ensure that you download the corresponding activation key part number for your IBM DB2 edition. Also, download this part number, CN30CML, IBM DB2 BLU Acceleration In-Memory Offering - Quick Start and Activation 10.5.0.5 for Linux, UNIX, and Windows, which includes a db2baf.lic file that is required for the BLU Acceleration activation.
4.3.3 IBM DB2 with BLU Acceleration deployment
At the time of writing this publication, IBM DB2 version 10.5.0.5 was compatible with Red Hat Enterprise Linux (RHEL) Server 7.1 on Power Systems (little endian). Therefore, in addition to all of the unique benefits of IBM DB2 with BLU Acceleration on Power Systems, your existing databases do not need to be migrated to a big endian environment.
For the next set of instructions, we assume the use of the same configuration as before for the demonstration.
Install the xlC package
We suggest that you configure a yum repository by using packages from the installation CD so that you can install the xlC package and resolve its dependencies automatically by using the command that is shown in Figure 4-52.
yum install libxlc.ppc64le
Figure 4-52 Using yum install libxlc for RHEL on Power little endian
However, if you do not configure a yum repository, you can download the xlC package directly. Ensure that you download the ppc64le package, which is the correct package for little endian. Use the following URL to download the package:
Also, you can use the rpm command as an alternative installation method but you must resolve the dependencies manually (Figure 4-53).
rpm -ivh libxlc-13.1.2.0-150526a.ppc64le.rpm
Figure 4-53 The rpm install libxlc for RHEL on Power little endian
Optionally install the libaio package
We suggest that you use the yum command to install the libaio package.
Example 4-1 The yum install libaio output for RHEL on Power Systems little endian
[root@dn05-dat server]# yum install libaio
Loaded plugins: product-id, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
GPFS-4.1.1.1 | 2.9 kB 00:00:00
local-rhels7.1-ppc64le-ppc64le | 4.1 kB 00:00:00
Resolving Dependencies
--> Running transaction check
---> Package libaio.ppc64le 0:0.3.109-12.ael7b will be installed
--> Finished Dependency Resolution
 
Dependencies Resolved
 
==============================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================
Installing:
libaio ppc64le 0.3.109-12.ael7b local-rhels7.1-ppc64le-ppc64le 24 k
 
Transaction Summary
==============================================================================================================================
Install 1 Package
Total download size: 24 k
Installed size: 158 k
Is this ok [y/d/N]: y
Downloading packages:
libaio-0.3.109-12.ael7b.ppc64le.rpm | 24 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
Installing : libaio-0.3.109-12.ael7b.ppc64le 1/1
Verifying : libaio-0.3.109-12.ael7b.ppc64le 1/1
 
Installed:
libaio.ppc64le 0:0.3.109-12.ael7b
 
Complete!
Download the DB2 installation package
To install the DB2 with BLU Acceleration trial software on your own platforms, download the 90-day no-charge trial software from the Get DB2 with BLU Acceleration website:
For our environment, we use Advanced Enterprise Server Edition (AESE).
 
Note: We suggest that you download the latest fix pack for your own platform at Download DB2 Fix Packs by version for DB2 for Linux, UNIX, and Windows at the following website:
https://ibm.biz/Bd4aMX
Initial setup
Follow these steps:
1. Ensure that you enabled X11 forwarding on both the client side and the server side if you perform this installation remotely.
 
Note: Check whether your network throughput is configured optimally because rendering the X application remotely over Secure Shell (SSH) might slow down due to network latency.
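The X11 forwarding from step 1 can be sketched as follows. This is a hedged illustration, and installserver is a hypothetical host name standing in for your installation server.

```shell
# Sketch (installserver is a hypothetical host name).
# Client side: forward X11 over SSH for the graphical db2setup wizard.
# Server side: /etc/ssh/sshd_config must contain "X11Forwarding yes"
# and the xauth package must be installed.
ssh -X root@installserver
# After logging in, confirm that the forwarded display is set:
echo $DISPLAY
```

If DISPLAY is empty after logging in, X11 forwarding is not active, and db2setup will fail to open its window; fall back to the console-mode db2_install in that case.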
2. To start the setup, log in as root. Go to the directory where you extracted the DB2 installation package and run db2setup, which is an X application. Alternatively, you can use db2_install, which is a console-mode setup.
3. For db2setup, on the Welcome window (Figure 4-54), click Install a Product, and then, under DB2 Version 10.5 Fix Pack 5 Workgroup, Enterprise, and Advanced Editions, click Install New.
Figure 4-54 Install a product as root for DB2 Version 10.5 Fix Pack 5 Workgroup, Enterprise, and Advanced Editions
4. The following options are shown on the left side of Figure 4-55 on page 91:
a. On the first setup window, select 1. Introduction. On the Welcome to DB2 Setup Wizard window, click Next.
b. Click 2. Software License Agreement. Read the agreement carefully and check I accept the terms on the license agreement window. Click Next.
c. Select 3. Installation type. We encourage you to use the typical installation and click Next, but you can customize for your environment.
d. Select 4. Installation action. Keep the default setting because it creates a response file /root/db2server.rsp, which can be reviewed later or used to reinstall on the same or another environment. Click Next.
e. Click 5. Installation directory. We suggest that you keep the default directory, which is /opt/ibm/db2/V10.5, but you can change the installation directory to your own preference. Click Next.
f. Click 6. DAS user. We will not use or refer to any DB2 Administration Server feature in this book. We advise that you check Create the DAS user later. Click Next.
g. Click 7. Instance setup. Check Create a DB2 instance, which is the environment on which you will store data and run applications. You must have an instance to use this product. Click Next.
h. Click 8. Partitioning options. For this demonstration, we do not use multiple partitions. We suggest that you click Single partition instance. If you still want or need to use a multiple partition instance, you must have a Database Partitioning Feature license, also. Click Next.
i. Click 9. Instance-owning user. Create a DB2 instance owner with the default user name, which is db2inst1 under the db2iadm1 group (Figure 4-55) or change it according to your environment’s or organization’s standards. Click Next.
Figure 4-55 Default DB2 instance owner setup
j. Click 10. Fenced user. Use the default setting because a new fenced user will be created. Fenced user-defined procedures (UDFs) and stored procedures will execute under this user and group (Figure 4-56). Or, change it according to your environment’s or organization’s standards. Click Next.
Figure 4-56 Default DB2 fenced user setup
k. Click 11. Notification setup. Select Do not set up your DB2 server to send notification this time, but you can set up your DB2 server to automatically send email or pager notifications to alert administrators when a database needs attention. Use your environment’s or corporate Simple Mail Transfer Protocol (SMTP) server in this latter case. Click Next.
l. Click 12. Summary. Review your settings and click Finish to start the installation.
 
Note: As a preferred practice, install the latest fix pack (FP). At the time of writing this publication, the latest fix pack is FP6. This fix pack can be installed from the previously downloaded and then extracted directory, specifying the path where the DB2 database product was installed, <db2_base_install_path>:
./installFixPack -b <db2_base_install_path>
For example, this command installs the fix pack:
./installFixPack -b /opt/ibm/db2/V10.5
Activate DB2 with BLU Acceleration by using the db2baf.lic license file
After the installation, you are required to activate DB2 BLU Acceleration with the correct DB2 server edition license.
To activate the license, retrieve the db2baf.lic file from the previously downloaded part CN30CML and execute the following command (Example 4-2):
<db2_base_install_path>/adm/db2licm -a db2baf.lic
Example 4-2 Activating the license for DB2 with BLU Acceleration
[root@dn05-dat adm]# /opt/ibm/db2/V10.5/adm/db2licm -a db2baf.lic
 
LIC1402I License added successfully.
 
 
LIC1426I This product is now licensed for use as outlined in your License Agreement. USE OF THE PRODUCT CONSTITUTES ACCEPTANCE OF THE TERMS OF THE IBM LICENSE AGREEMENT, IN THE FOLLOWING DIRECTORY: "/opt/ibm/db2/V10.5/license/en_US.iso88591"
4.3.4 Set up the DB2 instance
To enable the analytics workload and remote TCP/IP connection to the DB2 database, you must update two parameters.
Log in as the instance owner, by default, db2inst1, and execute the following commands:
su - db2inst1
db2set DB2_WORKLOAD=ANALYTICS
db2set DB2COMM=TCPIP
For the DB2_WORKLOAD registry parameter to become active, the DB2 instance must be restarted after you set DB2_WORKLOAD=ANALYTICS. Restart the instance with the following commands:
db2stop
db2start
 
Note: The default port number for DB2 instances is 50000. However, that port number might vary depending on your installation method or whether the port is in use by another service on the same server.
Therefore, you can check the correct port number for your DB2 instance by running the following command:
db2 get dbm cfg | grep SVCENAME
The output shows the service name. Then, you can verify the port number in the /etc/services file.
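That lookup can be sketched as follows. The service name db2c_db2inst1 and the port shown are assumptions for this illustration, and the grep is run against a scratch file; on a real server, you grep /etc/services for the name that "db2 get dbm cfg" reports.

```shell
# Sketch: map the SVCENAME service name to its TCP port.
# db2c_db2inst1 and 50000 are illustrative assumptions, and the lookup
# is shown against a scratch copy instead of the live /etc/services.
printf 'db2c_db2inst1\t50000/tcp\t\t# DB2 instance connection port\n' > /tmp/services.demo
grep '^db2c_db2inst1' /tmp/services.demo | awk '{print $2}'
```

Clients then use the numeric part of that value (here, 50000) as the port in their TCP/IP connection settings.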
4.3.5 GOSALES Cognos Business Intelligence sample database
For our demonstration, create the GOSALES database by using the db2 instance owner credentials according to your environment. This example uses a Spectrum Scale file system to store data files: /bigpfs/dbpath/gs_db:
mkdir /bigpfs/dbpath/gs_db
Create a DB2 database under the previously created directory. As part of a new database creation process, the configuration advisor automatically applies all required BLU Acceleration settings to optimize analytic workloads. Optionally, all required BLU Acceleration settings can also be applied explicitly through the AUTOCONFIGURE keyword as part of the database creation:
db2 CREATE DB GS_DBBLU ON /bigpfs/dbpath/gs_db AUTOCONFIGURE USING mem_percent 80 APPLY DB AND DBM
The expected output of this command is shown in Example 4-3.
Example 4-3 Output of the DB2 create command
[db2inst1@dn05-dat dbpath]$ db2 CREATE DB GS_DBBLU ON /bigpfs/dbpath/gs_db AUTOCONFIGURE USING mem_percent 80 APPLY DB AND DBM
Former and Applied Values for Database Manager Configuration
 
Description Parameter Former Value Applied Value
-------------------------------------------------------------------------------------------------
Application support layer heap size (4KB) (ASLHEAPSZ) = 15 15
No. of int. communication buffers(4KB)(FCM_NUM_BUFFERS) = AUTOMATIC(4096) AUTOMATIC(21580)
Enable intra-partition parallelism (INTRA_PARALLEL) = NO NO
Maximum query degree of parallelism (MAX_QUERYDEGREE) = ANY ANY
Agent pool size (NUM_POOLAGENTS) = AUTOMATIC(100) AUTOMATIC(100)
Initial number of agents in pool (NUM_INITAGENTS) = 0 0
Max requester I/O block size (bytes) (RQRIOBLK) = 65535 65535
Sort heap threshold (4KB) (SHEAPTHRES) = 0 0
 
 
Former and Applied Values for Database Configuration
 
Description Parameter Former Value Applied Value
-------------------------------------------------------------------------------------------------
Default application heap (4KB) (APPLHEAPSZ) = 256 256
Catalog cache size (4KB) (CATALOGCACHE_SZ) = (MAXAPPLS*5) 360
Changed pages threshold (CHNGPGS_THRESH) = 60 80
Database heap (4KB) (DBHEAP) = AUTOMATIC(1200) AUTOMATIC(11605)
Degree of parallelism (DFT_DEGREE) = 1 ANY
Default tablespace extentsize (pages) (DFT_EXTENT_SZ) = 32 32
Default prefetch size (pages) (DFT_PREFETCH_SZ) = AUTOMATIC(32) AUTOMATIC(64)
Default query optimization class (DFT_QUERYOPT) = 5 5
Max storage for lock list (4KB) (LOCKLIST) = 4096 AUTOMATIC(4096)
Log file size (4KB) (LOGFILSIZ) = 1000 1024
Number of primary log files (LOGPRIMARY) = 3 45
Number of secondary log files (LOGSECOND) = 10 19
Max number of active applications (MAXAPPLS) = AUTOMATIC(40) AUTOMATIC(40)
Percent. of lock lists per application (MAXLOCKS) = 10 AUTOMATIC(15)
Number of asynchronous page cleaners (NUM_IOCLEANERS) = AUTOMATIC(24) AUTOMATIC(1)
Number of I/O servers (NUM_IOSERVERS) = AUTOMATIC(196) AUTOMATIC(5)
Package cache size (4KB) (PCKCACHESZ) = (MAXAPPLS*8) AUTOMATIC(2812)
Sort list heap (4KB) (SORTHEAP) = 256 787556
SQL statement heap (4KB) (STMTHEAP) = AUTOMATIC(8192) AUTOMATIC(16384)
Statistics heap size (4KB) (STAT_HEAP_SZ) = AUTOMATIC(4384) AUTOMATIC(4384)
Utilities heap size (4KB) (UTIL_HEAP_SZ) = AUTOMATIC(5000) AUTOMATIC(11434465)
Self tuning memory (SELF_TUNING_MEM) = OFF ON
Automatic runstats (AUTO_RUNSTATS) = ON ON
Sort heap thres for shared sorts (4KB) (SHEAPTHRES_SHR) = 5000 15751124
Log buffer size (4KB) (LOGBUFSZ) = 256 2195
Default table organization (DFT_TABLE_ORG) = ROW COLUMN
Database memory threshold (DB_MEM_THRESH) = 100 100
 
 
Former and Applied Values for Bufferpool(s)
 
Description Parameter Former Value Applied Value
-------------------------------------------------------------------------------------------------
IBMDEFAULTBP Bufferpool size = 1000 1968890
 
 
Former and Applied Values for System WLM Objects
 
Description Former Value Applied Value
-------------------------------------------------------------------------------------------------
Work Action SYSMAPMANAGEDQUERIES Enabled = Y Y
Work Action Set SYSDEFAULTUSERWAS Enabled = Y Y
Work Class SYSMANAGEDQUERIES Timeroncost = 1.50000E+05 1.50000E+05
Threshold SYSDEFAULTCONCURRENT Enabled = N Y
Threshold SYSDEFAULTCONCURRENT Maxvalue = 15 15
 
 
DB210209I The database was created successfully. Please restart the instance
so configuration changes take effect.
Restart the instance for the configuration changes to take effect with the following commands:
db2stop
db2start
4.3.6 SPSS Collaboration and Deployment Services database repository
SPSS Collaboration and Deployment Services version 7.0 is not compatible with DB2 with BLU Acceleration column-organized tables. During the database repository creation process, indexes are created to speed up data retrieval from the repository. However, column-organized tables in DB2 with BLU Acceleration do not require, and do not support, the indexes that SPSS Collaboration and Deployment Services creates. Therefore, to bypass this problem, the default table organization for the SPSS Collaboration and Deployment Services DB2 database must be set to row-organized tables after the database creation.
 
Important: The SPSS Collaboration and Deployment Services database repository must be created before the installation of the SPSS Collaboration and Deployment Services product.
We suggest that you create a new db2 instance that is optimized for an online transaction processing (OLTP) workload to create the SPSS Collaboration and Deployment Services repository because the previously configured db2inst1 instance, in our scenario, is optimized for an analytic workload to hold the GOSALES database. Even though SPSS is in essence an analytical package, its repository does not use the same analytical workload characteristics. For more information about creating a new instance, consult the db2icrt command syntax.
If you are comfortable reusing the same db2inst1 instance for the SPSS Collaboration and Deployment Services database repository, you can proceed, even though it is not fully optimized for its workload:
db2 CREATE DATABASE spsscds ON /bigpfs/dbpath/spsscds USING CODESET UTF-8 TERRITORY US COLLATE USING SYSTEM
db2 connect to spsscds
db2 "CREATE BUFFERPOOL CDS8K IMMEDIATE SIZE 250 AUTOMATIC PAGESIZE 8 K"
db2 "CREATE REGULAR TABLESPACE CDS8K PAGESIZE 8 K MANAGED BY AUTOMATIC STORAGE EXTENTSIZE 8 OVERHEAD 10.5 PREFETCHSIZE 8 TRANSFERRATE 0.14 BUFFERPOOL CDS8K DROPPED TABLE RECOVERY ON"
db2 "CREATE BUFFERPOOL CDSTEMP IMMEDIATE SIZE 250 PAGESIZE 32 K"
db2 "CREATE SYSTEM TEMPORARY TABLESPACE CDSTEMP PAGESIZE 32K MANAGED BY AUTOMATIC STORAGE EXTENTSIZE 16 OVERHEAD 10.5 PREFETCHSIZE 16 TRANSFERRATE 0.14 BUFFERPOOL CDSTEMP"
Change the default table organization to row-organized tables with the following command:
db2 update db cfg for spsscds using DFT_TABLE_ORG row
4.4 SPSS Analytical Decision Management
This section describes the steps to install SPSS Analytical Decision Management.
4.4.1 Outline of steps
To set up this environment, the following steps are required:
4.4.2 Install the prerequisite items for AIX
In this step, proceed to check and install all of the AIX prerequisites.
4.4.3 Install the Installation Manager
Follow these steps to deploy the Installation Manager:
1. Execute IBMIM.
2. Wait until the IBM Installation Manager wizard opens.
3. Click File → Preferences.
4. Click Add Repository.
5. Click Browse. Select the directory where you stored the installation file and locate the repository files. Click OK.
6. Set all repositories:
 – IBM WebSphere 8.5.5
 – IBM SPSS Collaboration and Deployment Service 7.0.0
 – IBM SPSS Modeler Adopter 17.0
 – IBM SPSS Analytical Decision Management 17.0
7. Check all repositories by clicking Test Connection. Check whether the message “All the selected repositories are connected” is displayed, as shown in Figure 4-57. Click OK.
Figure 4-57 Test Connection
8. Click OK.
4.4.4 Install and configure WebSphere Application Server
This section describes the installation and configuration steps to deploy the application server.
Install WebSphere Application Server
Follow the installation steps:
1. Click Install (Figure 4-58).
Figure 4-58 IBM Installation Manager
2. Select the items that you want to install. Select IBM WebSphere Application Server Version 8.5.5.0. After you select the software, the status changes to “Will be installed”. Click Next.
3. Select I accept the terms in the license agreement for the Licenses process in Installation Packages.
4. Click Next for the Location in the Installation Packages, which is for Shared Resources.
5. For the Directory, click Next. Click Browse. Select the path. Click Next.
6. Click Next for Feature in Installation Package.
7. Click Next for Feature → Translation in Install Package.
8. Click Next for Feature in Install Package.
9. Click Install.
10. Click Next.
11. Check Review Summary for the Summary of the Install Package.
12. Click Install.
13. Click Finish.
Configure
Use the following steps to configure the application server:
1. Click Create.
2. Select Application server for the environment to create, as shown in Figure 4-59. Click Next.
Figure 4-59 WebSphere Environment Selection
3. Figure 4-60 shows the WebSphere profiles. Click Next.
Figure 4-60 WebSphere Profiles
4. Select Typical profile creation, as shown in Figure 4-61. Click Next.
Figure 4-61 WebSphere Profile Creation Options
5. Set the user name and password for administrative security (Figure 4-62). For this example, we set the user name to admin and the password to ibm1ibm. For the other items, leave them as shown. Click Next.
Figure 4-62 WebSphere Administrative Security
6. Check the content in the Profile Creation Summary, as shown in Figure 4-63. If it is correct, click Create.
Figure 4-63 WebSphere Profile Creation Summary
 
Note: In Figure 4-63, the HTTP transport port is 9080. Port 9080 is the default, and it was used in our sample.
7. Click Finish, as shown in Figure 4-64.
Figure 4-64 WebSphere Profile Creation Complete
8. Wait until the First steps window opens, as shown in Figure 4-65. Click Installation verification.
Figure 4-65 WebSphere First steps
9. Confirm that the messages “The Installation Verification Tool verification succeeded” and “The installation verification is complete” display at the bottom of the First steps output - Installation verification window that is shown in Figure 4-66.
Figure 4-66 WebSphere First steps output
10. Close the window.
11. Click Administrative console from the First steps window (Figure 4-65 on page 104) or type http://<servername>:9060/admin in your browser.
12. Log in with the user name admin and the password ibm1ibm.
For this demonstration, we set the user name to admin and the password to ibm1ibm, as shown in Figure 4-67.
Figure 4-67 WebSphere Integrated Solutions Console
13. Check whether your profile was created successfully, as shown in Figure 4-68.
Figure 4-68 WebSphere Application servers
14. Log out of the WebSphere Integrated Solutions Console.
4.4.5 Install and configure the DB2 database
Create the database on DB2, and set up the DB2 client.
4.4.6 Install and configure SPSS Modeler Server
The installation and configuration of the SPSS Modeler Server are described.
Installation
The installation consists of the following steps:
1. Start X Window System (for example, Xming) on your client.
2. Connect to the AIX server.
3. Extract spss_mod_17.0_cndsadp_7.0_aix_ml.zip.
4. The extraction produces spss_mod_svr_17.0_aix_ml.bin.
5. Set IATEMPDIR to a directory where plenty of disk space is available.
6. Execute spss_mod_svr_17.0_aix_ml.bin.
7. Wait until the Modeler Server installation wizard opens.
8. Click OK for language selection. Click Next.
9. Click the license agreement. Select I accept the terms in the license agreement. Click Next.
10. Select Production mode. Click Next.
11. Set the path where you want to install Modeler Server. Click Next.
12. Wait until installation completes. Click Done.
Sample result
Example 4-4 shows the result of this sample installation.
Example 4-4 Result
unzip spss_mod_17.0_cndsadp_7.0_aix_ml.zip
export IATEMPDIR=/ibmapp/
./spss_mod_svr_17.0_aix_ml.bin
How to start Modeler Server
To start Modeler Server, execute modelersrv.sh with the start parameter:
./modelersrv.sh start
Sample result
Example 4-5 shows the result.
Example 4-5 Result
/usr/IBM/SPSS/ModelerServer/17.0/modelersrv.sh start
IBM SPSS Text Analytics Server is already running
IBM SPSS Modeler Server started
How to stop Modeler Server
To stop the Modeler Server, execute modelersrv.sh with the stop parameter:
./modelersrv.sh stop
Sample result
Example 4-6 shows the result.
Example 4-6 Result
./modelersrv.sh stop
IBM SPSS Text Analytics Server stopped
IBM SPSS Modeler Server stopped
Sample result
Check whether Modeler Server is up and running with the command that is shown in Example 4-7.
Example 4-7 Check whether Modeler Server is running
/usr/IBM/SPSS/ModelerServer/17.0/modelersrv.sh list
PID PPID USER VSZ PCPU COMMAND
3014978 1 root 19196 0.0 /usr/IBM/SPSS/ModelerServer/17.0/modelersrv_17_0 -server
 
If Modeler Server is not running, the result contains only the header line:
PID PPID USER VSZ PCPU COMMAND
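That check can be scripted: when the server is down, the list output contains only the header line, so counting lines is enough. The sketch below runs against captured sample output (an assumption for illustration) rather than a live server.

```shell
# Sketch: decide from captured "modelersrv.sh list" output whether the
# server is up. With the server down, only the header line is printed.
printf 'PID PPID USER VSZ PCPU COMMAND\n3014978 1 root 19196 0.0 modelersrv_17_0 -server\n' > /tmp/mls.out
if [ "$(wc -l < /tmp/mls.out)" -gt 1 ]; then
  echo running
else
  echo stopped
fi
```

On a live server, you would pipe the real command into the same test, for example `modelersrv.sh list > /tmp/mls.out`.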
4.4.7 Install and configure IBM SPSS Collaboration and Deployment Service
This section describes the installation and configuration steps for SPSS Collaboration and Deployment Service.
Install
Follow these steps to deploy the SPSS Collaboration and Deployment Service:
1. Click Install.
2. Select the product that you want to install from the Installation Packages window (Figure 4-69 on page 109). Select IBM SPSS Collaboration and Deployment Service - Repository Server. After you select the product, the status changes to Will be installed. Click Next.
3. Select I accept the terms in the license agreement. Click Next.
4. Click Next.
5. Select the path to install IBM SPSS Collaboration and Deployment Service.
 
Important: If files exist in the target directory, Installation Manager does not proceed with the installation, so you must delete the existing directory first.
6. Click Next.
7. Scroll down to the bottom of the window. Check whether the required amount of space is available for the installation. Click Next.
8. Click Install.
 
Important: The installation uses both the usr and opt directories. Check that both directories have enough space for the installation. If the opt directory runs out of space, you might see the following message:
“cannot open http://localhost:9080/DM but you can open http://localhost:9080/config, http://localhost:9080/peb, and so on”
The likely cause of this issue is a lack of disk space: the deployment file was not generated correctly, or it was not deployed to WebSphere correctly.
9. Wait until the installation completes.
10. Check View Log Files to see whether any errors are logged.
11. For “Which program do you want to start?”, select IBM SPSS Collaboration and Deployment Services Configuration Tool (Figure 4-69) and click Finish.
Figure 4-69 Installation Manager Install Packages window
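The disk-space caveat in the Important note can be checked before you start the installation. The following hypothetical helper reports whether the filesystem that holds a directory has a minimum amount of free space; the 2048 MB threshold in the usage example is an assumption, so adjust it for your environment.

```shell
#!/bin/sh
# Hypothetical pre-installation check: the installation writes under both
# the usr and opt directories, so verify free space in each beforehand.

check_free_mb() {
    # Prints 1 if the filesystem that holds $1 has at least $2 MB free,
    # else 0. Uses the POSIX -Pk output of df (column 4 = available KB).
    dir=$1; need_mb=$2
    avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
    if [ $((avail_kb / 1024)) -ge "$need_mb" ]; then echo 1; else echo 0; fi
}

# Typical usage:
# for d in /usr /opt; do
#     [ "$(check_free_mb "$d" 2048)" = 1 ] || echo "WARNING: low space in $d"
# done
```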
Configure
This section describes how to configure the software.
 
Note: After you click Finish on the Install Packages window, the installation wizard starts.
Follow these steps:
1. In our demonstration environment, we used the following settings:
a. Application Server type: IBM WebSphere
b. Path or folder name for the WebSphere profile directory (Figure 4-70): /usr/IBM/WebSphere/AppServer/profiles/AppSrv01/
Figure 4-70 Selecting the WebSphere profile
c. User name: admin
d. Password: ibm1ibm
2. Figure 4-71 shows the SPSS Collaboration and Deployment Services Configuration Tool window.
Figure 4-71 SPSS Collaboration and Deployment Services Configuration Tool window
3. Figure 4-72 shows the SPSS Collaboration and Deployment Services Configuration Tool for the application server options. Click Next.
Figure 4-72 IBM SPSS Collaboration and Deployment Configuration Tool: Application Server options
4. Check the database name that is created for Collaboration and Deployment Service and click Next. In this demonstration, we set the following information (Figure 4-73):
 – Database type: IBM DB2
 – Host name: dn05
 – Port: 50000
 – Database name: SPSSCDS
 – User name: db2inst1
 – Password: ibm1ibm
Figure 4-73 Configuration tool: Database server information
 
 
5. Select Erase any existing data, as shown in Figure 4-74. Click Next.
Figure 4-74 Configuration tool: Erase existing data
6. Type the password for encryption. For this case, we entered ibm1ibm. See Figure 4-75. Click Next.
Figure 4-75 Configuration tool: Entering the encryption password
7. Set the ID and password for the repository administrator (Figure 4-76) and click Next. We entered the following information:
 – Repository administrator: admin
 – Password: ibm1ibm
 – Confirm password: ibm1ibm
Figure 4-76 Configuration Tool: Setting up the repository administrator
8. Select Automatic for the deployment mode (Figure 4-77) and click Next.
 
Note: If you select Manual mode, you must deploy your changes to WebSphere manually.
Figure 4-77 Configuration tool: Selecting the deployment mode
9. Click Configure on the Configuration summary (Figure 4-78).
Figure 4-78 Configuration tool: Configuration summary
 
 
Note: While you monitor the progress, it might be a good idea to also check the status of the database. Even if the progress indicator looks frozen, Collaboration and Deployment Services might still be communicating with the repository database.
10. Click Finish.
11. After the installation completes, check the log file by using the View Log files option. The actual log files are stored by default in the /usr/ibm/installdata_for_im/logs directory.
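Checking the log files can also be scripted. The following hypothetical helper lists every log file in the Installation Manager log directory that contains the word "error" (case-insensitive); the default path is the one used in our installation.

```shell
#!/bin/sh
# Hypothetical log scan: list log files under the Installation Manager
# log directory that contain error text. A nonmatching grep (exit 1) is
# treated as "no errors", not as a failure.

scan_logs_for_errors() {
    logdir=${1:-/usr/ibm/installdata_for_im/logs}
    grep -ril 'error' "$logdir" 2>/dev/null || true
}

# Typical usage:
# scan_logs_for_errors | while read -r f; do echo "check: $f"; done
```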
4.4.8 Install and configure IBM SPSS Modeler Server Adapters for Collaboration and Deployment Services
This section describes the installation steps.
Install
Follow these steps:
1. Click Install.
2. Select the product that you want to install from the list of Installation Packages. Select IBM SPSS Modeler Adapters for Collaboration and Deployment Services. After you select it, the status changes to Will be installed. Click Next.
3. Select I accept the terms in the license agreement and click Next.
4. Click Next.
5. Select the path to install IBM SPSS Modeler Adapters for Collaboration and Deployment Services and click Next.
6. Scroll down to the bottom of the window. Check whether the required amount of space is available for the installation. Click Next.
7. In the Common Configurations section on the Features tab of the Install Packages page, type the user and password that you set for the Collaboration and Deployment Service - Repository Server. For example, we used admin for the user and ibm1ibm for the password, as shown in Figure 4-79. Click Next.
Figure 4-79 Install Packages window: Entering the user ID and password
8. Check whether enough disk space exists for the installation. If enough space is available for the installation, click Install.
9. Click OK.
10. After the installation completes, check the log file by clicking View Log files. The actual log files are stored by default in the /usr/ibm/installdata_for_im/logs directory.
11. Click Finish.
4.4.9 Install IBM SPSS Analytical Decision Management
This section provides the steps to install the IBM SPSS Analytical Decision Management component:
1. Click Install.
2. Select the product that you want to install from the Installation Packages window. Select IBM SPSS Analytical Decision Management. Click Next.
3. Scroll down to the bottom of the window. Select I accept the terms in the license agreement. Click Next.
4. Check whether enough space exists for the installation. Click Next.
5. Use the default options and click Next on the Features tab on the Install Packages window.
6. On the Install Packages page, set the user and password that you set for the Collaboration and Deployment Service - Repository Server. For example, we used admin for the user and ibm1ibm for the password.
7. Verify that enough disk space is available for the installation of IBM SPSS Analytical Decision Management. Click Install.
8. After the installation completes, check the log file by clicking View Log files.
9. The actual log files are stored by default in the /usr/ibm/installdata_for_im/logs directory.
4.4.10 Install SPSS Collaboration and Deployment Service
Follow these steps to install SPSS Collaboration and Deployment Service:
1. Extract spss_cnds_depmgr_64b_7.0_win_ml.zip.
2. Navigate to the spss_cnds_depmgr_64b_7.0_win_ml\Deployment_Manager_64\install.exe file location.
3. Right-click install.exe.
4. Select Run as Administrator.
5. If the message “This file is in an untrusted location. Are you sure you want to run it?” appears, click Yes (Figure 4-80).
Figure 4-80 Windows User Account Control for the deployment manager installation
6. Log in with your ID and password. Wait until the process completes.
7. Select English and click OK.
8. Click Next.
9. Select I accept the terms in the license agreement and click Next.
10. Change the installation path if necessary. Click Next.
11. Accept the defaults. Click Next.
12. Click Install.
13. Click Done.
4.5 Cognos for Dashboarding
In our demonstration, we install Java, the DB2 client, OpenLDAP, and the Apache HTTP server to support the Cognos Business Intelligence Version 10 installation. At the time of writing this publication, Cognos Business Intelligence is supported on Linux on POWER8, but check that your environment runs in big endian mode.
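The endianness requirement can be verified from the shell. The following hypothetical sketch inspects how the current system orders the bytes of a 16-bit word.

```shell
#!/bin/sh
# Hypothetical endianness check: Cognos BI on Linux on POWER8 requires a
# big endian environment.

byte_order() {
    # Bytes 0x00 0x01 read as one unsigned 16-bit word: 1 on a big
    # endian system, 256 on a little endian system.
    word=$(printf '\0\1' | od -An -d | awk '{print $1 + 0}')
    if [ "$word" -eq 1 ]; then echo big; else echo little; fi
}

# Typical usage:
# [ "$(byte_order)" = big ] || echo "WARNING: this system is not big endian"
```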
4.5.1 Install IBM Java SDK 6.0
Follow these steps:
1. Download IBM Java SDK 6.0. Install it by running ibm-java-sdk-6.0-16.7-linux-ppc64.bin, which is an X application. Check that your environment is set correctly to X11 forwarding in the server and client side as shown in Figure 4-81. Read the license agreement carefully, accept the terms, and click Next.
Figure 4-81 IBM Java SDK 6.0 installation window
2. On the Introduction window, we strongly suggest that you close all programs before you continue with this installation. Click Next.
3. In the Choose a destination folder section, we suggest that you use /usr/java/ibm-java-ppc64-60 as a Java binaries directory. You can change it according to your environment’s or organization’s standards. Click Next.
4. On the Pre-Installation Summary window, review the information and click Install.
5. The installation might take a few seconds. Click Done to conclude the IBM Java SDK 6.0 installation.
4.5.2 Install DB2 client
Before you install the DB2 client, install the 32-bit pam.ppc package and vacpp, as shown in Figure 4-82.
yum install pam.ppc
 
yum install vacpp
Figure 4-82 yum install for pam.ppc and vacpp
Download and extract the IBM DB2 client installation files. In our scenario, we used the full server installation media from the DB2_V10.5_ltd_CD_Linux_ipSeries.tar file. Run ./server/db2setup from the extracted directory and wait a few seconds for the X Window System installation window to open.
On the next window, click Install a product and scroll down to click Install New for IBM Data Server Client Version 10.5, as shown in Figure 4-83.
Figure 4-83 Install IBM Data Server Client Version 10.5
See Figure 4-84. Follow these steps. On the left side of the window, select these options:
1. Click 1. Introduction. On the menu option 1. Introduction, click Next.
2. Click 2. Software License Agreement. Read the software license agreement carefully and click Next.
3. Click 3. Installation type. We strongly encourage you to select the typical installation, but you can customize the installation according to your environment’s or organization’s standards.
4. Click 4. Installation action. Keep the default settings to create a response file for future reference, and click Next.
5. Click 5. Installation directory. We suggest that you keep the default installation directory, which is /opt/ibm/db2/V10.5, but you can modify it.
6. Click 6. Instance setup. Create a DB2 instance. An instance is required to use the product. Click Next.
7. Click 7. Instance-owning user. We encourage you to use the default settings, but you can change them, as shown in Figure 4-84. Click Next.
Figure 4-84 Set user information for the DB2 instance owner for DB2 client
8. Review your installation settings by clicking 8. Summary. Click Finish to start the installation, which takes a few minutes.
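The response file that step 4 generates can be reused to install the DB2 client unattended on additional nodes with the db2setup -r option. The following sketch is hypothetical: the response file path is an assumption, and db2setup is only invoked after the file is confirmed to exist.

```shell
#!/bin/sh
# Hypothetical unattended reinstall using the response file that the
# interactive installation recorded (step 4). Run from the directory
# where the DB2 installation media was extracted.

silent_db2_client_install() {
    rsp=$1
    if [ ! -f "$rsp" ]; then
        echo "response file not found: $rsp" >&2
        return 1
    fi
    # db2setup -r performs a silent installation driven by the file.
    ./server/db2setup -r "$rsp"
}

# Typical usage: silent_db2_client_install /tmp/db2client.rsp
```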
4.5.3 Install Apache HTTP server 2
Follow these steps:
1. Apache HTTP server version 2 is supported by Cognos Business Intelligence Version 10. Download and extract the httpd-2.2.31.tar file. Go to your installation directory and type the tar xvf httpd-2.2.31.tar command to extract the tar file, as shown in Figure 4-85.
tar xvf httpd-2.2.31.tar
Figure 4-85 Untar Apache HTTP server installation
2. In the extracted httpd-2.2.31, run the configure command. We suggest that you use /usr/local/apache2 for the installed Apache HTTP server directory, as shown in Figure 4-86.
./configure --prefix=/usr/local/apache2
Figure 4-86 Configure Apache HTTP server
3. Run the make and make install commands in the same directory where you performed the configuration to conclude the installation, as shown in Figure 4-87.
make
 
make install
Figure 4-87 Using the make and make install commands
4.5.4 Install and configure OpenLDAP
Follow these steps:
1. You can use the LDAP server that is deployed in your organization or install a new LDAP database, as demonstrated in Figure 4-88. Install the openldap-servers and the openldap-clients packages.
yum install openldap-servers
 
yum install openldap-clients
Figure 4-88 yum install openldap-servers and openldap-clients
2. Hash a new root password for your environment by using the slappasswd command. Save the hashed password string for later usage in your own configuration files.
3. Figure 4-89 shows the configuration of the /etc/openldap/slapd.conf file that is used in our environment, which you can adapt for your own configuration. Ensure that you replace the rootpw hashed password string with your own slappasswd output that you saved in the previous step.
include /etc/openldap/schema/corba.schema
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/duaconf.schema
include /etc/openldap/schema/dyngroup.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/java.schema
include /etc/openldap/schema/misc.schema
include /etc/openldap/schema/nis.schema
include /etc/openldap/schema/openldap.schema
include /etc/openldap/schema/ppolicy.schema
include /etc/openldap/schema/collective.schema
 
allow bind_v2
 
pidfile /var/run/openldap/slapd.pid
argsfile /var/run/openldap/slapd.args
 
#######################################################################
# ldbm and/or bdb database definitions
#######################################################################
 
database bdb
suffix "dc=cognos-test,dc=org"
checkpoint 1024 15
rootdn "cn=admin,dc=cognos-test,dc=org"
rootpw {SSHA}TdrqvQFyW50NcvZJsfojdZFakOOEm/j+
 
directory /var/lib/ldap
Figure 4-89 The slapd.conf example file
4. As you can see in the slapd.conf example (Figure 4-89), the defined database suffix is "dc=cognos-test,dc=org" and the rootdn is "cn=admin,dc=cognos-test,dc=org".
5. Those elements must be created as the initial domain entries. Save them in a domain.ldif file, as shown in Example 4-8.
Example 4-8 The domain.ldif example file
dn: dc=cognos-test, dc=org
dc: cognos-test
o: My Cognos Test
objectclass: top
objectclass: organization
objectclass: dcObject
 
dn: cn=admin,dc=cognos-test,dc=org
userPassword: ibm1ibm
sn: Administrator
cn: Administrator
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
6. Run the ldapadd command, as shown in Figure 4-90, to create the initial domain.
ldapadd -x -W -D "cn=admin,dc=cognos-test,dc=org" -f domain.ldif
Figure 4-90 The ldapadd command to create the LDAP domain
7. You can also create additional users. For example, to add a user that is named adam, create an adam.ldif file, as shown in Example 4-9.
Example 4-9 New user adam.ldif file
dn: cn=adam,dc=cognos-test,dc=org
userPassword: ibm2ibm
sn: adam
cn: adam
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
8. Run the ldapadd command that is shown in Figure 4-91 to add the user adam to the LDAP directory.
ldapadd -x -W -D "cn=admin,dc=cognos-test,dc=org" -f adam.ldif
Figure 4-91 Adding a user that is named adam to the LDAP directory
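Writing one .ldif file per user quickly becomes repetitive. The following hypothetical helper generates an inetOrgPerson entry like the one in Example 4-9 for any user name and password; the base DN matches the slapd.conf used in our environment.

```shell
#!/bin/sh
# Hypothetical LDIF generator for additional Cognos test users. The
# dc=cognos-test,dc=org suffix matches the slapd.conf in Figure 4-89.

make_user_ldif() {
    # $1 user name, $2 password
    cat <<EOF
dn: cn=$1,dc=cognos-test,dc=org
userPassword: $2
sn: $1
cn: $1
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
EOF
}

# Typical usage:
# make_user_ldif adam ibm2ibm > adam.ldif
# ldapadd -x -W -D "cn=admin,dc=cognos-test,dc=org" -f adam.ldif
```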
4.5.5 Configure Cognos Business Intelligence
Follow these steps:
1. Before you install Cognos Business Intelligence, check that the motif and libgcc packages are installed in both their 32-bit and 64-bit versions, as shown in Figure 4-92.
yum install motif.ppc
yum install motif.ppc64
yum install libgcc.ppc
yum install libgcc.ppc64
Figure 4-92 Install motif and libgcc for both 32 bits and 64 bits
2. Download and extract the bi_svr_10.2.2_lxp_ml.tar file. Run ./linuxppc64h/issetup to start the X Window System setup, as shown in Figure 4-93.
Figure 4-93 Cognos Business Intelligence Installation Wizard
3. Select the language that you want and click Next.
4. In the IBM license agreement, read the terms carefully, check I agree, and click Next.
5. On the Installation Location window, change to the installation directory that you want or keep the default. Click Next.
6. Select all of the available components to install under IBM Cognos Business Intelligence Server and click Next.
7. Review the installation summary and click Next to start the installation. The installation progresses for a few minutes. The Finish window opens.
8. Change your environment variables according to your environment. Example 4-10 shows our particular environment variables.
Example 4-10 Environment variables for Cognos
export JAVA_HOME=/usr/java/ibm-java-ppc64-60
export DB2PATH=/opt/ibm/db2/V10.5
export LD_LIBRARY_PATH=$DB2PATH/lib32:$DB2PATH/lib64:/opt/ibm/cognos/c10_2_2_64/cgi-bin:/opt/ibm/cognos/c10_2_2_64/cgi-bin/lib:/opt/ibm/cognos/c10_2_2_64/bin64
9. We suggest that you save these environment settings in the Cognos owner's profile.
 
Note: Check whether your firewall setting prevents other nodes from reaching your Cognos nodes. If you want to temporarily disable your firewall for the installation, run the following commands:
systemctl stop firewalld.service
systemctl disable firewalld.service
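Saving the settings to the profile (step 9) can be done safely with an idempotent append, so that repeated runs do not duplicate lines. This is a hypothetical sketch; the profile path in the usage example is an assumption.

```shell
#!/bin/sh
# Hypothetical helper for step 9: append an environment line to a profile
# only if an identical line is not already present (-x whole line,
# -F fixed string), so the script can be rerun safely.

add_env_line() {
    line=$1; profile=$2
    grep -qxF "$line" "$profile" 2>/dev/null || echo "$line" >> "$profile"
}

# Typical usage:
# for l in 'export JAVA_HOME=/usr/java/ibm-java-ppc64-60' \
#          'export DB2PATH=/opt/ibm/db2/V10.5'; do
#     add_env_line "$l" /home/cognos/.profile
# done
```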
4.5.6 Configure the Apache HTTP Server for Cognos
Follow these steps:
1. Edit your /usr/local/apache2/conf/httpd.conf file to add the entries that are shown in Figure 4-94. Replace <cognos_installation_directory> with the correct settings for your environment.
ScriptAlias /ibmcognos/cgi-bin "<cognos_installation_directory>/cgi-bin"
Alias /ibmcognos "<cognos_installation_directory>/webcontent"
 
<Directory "<cognos_installation_directory>/cgi-bin">
Options None
AllowOverride None
Order allow,deny
Allow from all
</Directory>
 
<Directory "<cognos_installation_directory>/webcontent">
Options Indexes MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory>
Figure 4-94 Additional configuration for Cognos on Apache HTTP Server
 
Note: You can run Apache HTTP Server under a different credential than apache. For example, edit the http.conf file to change the credentials to the user and group that you want:
#
# If you want HTTPd to run as a different user or group, you must run
# httpd as root initially and it will switch.
#
# User/Group: The name (or #number) of the user/group to run HTTPd as.
# It is usually a preferred practice to create a dedicated user and group for
# running httpd, as with most system services.
#
User nobody
Group nobody
2. Run the apachectl configtest command to check the httpd.conf file syntax, as shown in Figure 4-95.
/usr/local/apache2/bin/apachectl configtest
Figure 4-95 apachectl configtest
3. Fix any syntax errors and start the Apache HTTP Server service, as shown in Figure 4-96.
/usr/local/apache2/bin/apachectl -f /usr/local/apache2/conf/httpd.conf
Figure 4-96 apachectl start daemon
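The entries in Figure 4-94 can be generated with the placeholder already replaced, so nothing needs to be hand-edited before appending to httpd.conf. The helper function name is hypothetical; the emitted stanzas are exactly those from Figure 4-94.

```shell
#!/bin/sh
# Hypothetical generator for the Figure 4-94 Cognos stanzas, with
# <cognos_installation_directory> substituted from the first argument.

cognos_httpd_conf() {
    c10=$1   # Cognos installation directory
    cat <<EOF
ScriptAlias /ibmcognos/cgi-bin "$c10/cgi-bin"
Alias /ibmcognos "$c10/webcontent"

<Directory "$c10/cgi-bin">
Options None
AllowOverride None
Order allow,deny
Allow from all
</Directory>

<Directory "$c10/webcontent">
Options Indexes MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory>
EOF
}

# Typical usage:
# cognos_httpd_conf /opt/ibm/cognos/c10_2_2_64 >> /usr/local/apache2/conf/httpd.conf
```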
4.5.7 Copy DB2 client drivers to Cognos libraries
Follow these steps:
1. Locate the db2jcc driver files. If you selected the default DB2 installation path, the files are /opt/ibm/db2/V10.5/java/db2jcc.jar and /opt/ibm/db2/V10.5/java/db2jcc_license_cu.jar.
2. Copy both files to your <cognos_installation_directory>/webapps/p2pd/WEB-INF/lib/ directory.
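The copy step above can be sketched as a small script that fails loudly if a driver JAR is missing. Both paths in the usage example assume the defaults used in this chapter.

```shell
#!/bin/sh
# Hypothetical sketch of the driver copy: copy the DB2 JDBC JARs into the
# Cognos web application library, aborting if either JAR is absent.

copy_db2_drivers() {
    src=$1; dst=$2
    for jar in db2jcc.jar db2jcc_license_cu.jar; do
        if [ ! -f "$src/$jar" ]; then
            echo "missing driver: $src/$jar" >&2
            return 1
        fi
        cp "$src/$jar" "$dst/"
    done
}

# Typical usage:
# copy_db2_drivers /opt/ibm/db2/V10.5/java \
#     /opt/ibm/cognos/c10_2_2_64/webapps/p2pd/WEB-INF/lib
```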
4.5.8 Apply Cognos fix packs
This step is required for our demonstration because a few features that our scenario requires are provided in Fix Pack 2 for Cognos Business Intelligence Server 10.2.2.
Follow these steps:
1. Stop the Cognos Business Intelligence Server with the command that is shown in Figure 4-97.
<cognos_installation_directory>/bin64/cogconfig.sh -stop
Figure 4-97 Stop Cognos Business Intelligence Server
2. Download and extract Fix Pack 2. Run ./linuxppc64h/issetup. Follow a procedure that is similar to the procedure in 4.5.5, “Configure Cognos Business Intelligence” on page 127.
3. Start the Cognos Business Intelligence Server in silent mode with the command that is shown in Figure 4-98.
<cognos_installation_directory>/cogconfig.sh -s
Figure 4-98 Start Cognos Business Intelligence Server
4.5.9 Install Framework Manager
To publish a model to the Cognos portal, you need Framework Manager. Follow these steps to install Framework Manager:
1. Extract fm_10.2.2_win_ml.tar.gz to fm_10.2.2_win_ml.
2. Navigate to the fm_10.2.2_win_ml\win32\issetup.exe file location.
3. Right-click issetup.exe.
4. Select Run as Administrator.
5. Click English. Click Next.
6. Click I Agree. Click Next.
7. Use the default setting. Ensure that you set the server use type to Production. Click Next.
8. If the message “The directory <installpath> does not exist. Do you want to create it during installation?” displays, answer Yes (Figure 4-99). Click Next.
Figure 4-99 Message about creating a folder during the Framework Manager installation
9. Click Next.
10. Wait until the installation completes.
11. Select Start → All Programs → IBM Cognos.
12. Start Cognos Configuration.
13. Select Local Configuration → Environment.
14. Use the following settings:
 – For Gateway settings > Gateway URI:
http://servername:80/cognos/cgi-bin/cognos.cgi
 – For other URI settings > Dispatcher URI for external application:
http://servername:80/p2pd/servlet/dispatch
15. Click File → Save.
4.5.10 Apply Cognos Fix Packs for client
This step is required for our demonstration because a few features that our scenario requires are provided in Fix Pack 2 for Cognos Business Intelligence Server 10.2.2. If you applied the fix pack to the server, you must apply Fix Pack 2 to your Framework Manager, too.
Follow these steps:
1. Close Framework Manager.
2. Extract up_bisrvr_win32_10.2.6102.54_ml from up_bisrvr_winx64h_10.2.6102.54_ml.tar.
3. Navigate to the up_bisrvr_win32_10.2.6102.54_ml\win32 directory, which contains issetup.exe.
4. Right-click issetup.exe.
5. Select Run as Administrator.
6. Click English. Click Next.
7. Click I Agree. Click Next.
8. Use the default setting. Ensure that you set the server use type to Production. Click Next.
9. If the message “The directory <installpath> does not exist. Do you want to create it during installation?” displays, answer Yes (Figure 4-99 on page 131). Click Next.
10. Click Next.
11. Wait until the installation completes.
12. Select Start → All Programs → IBM Cognos.
13. Start Cognos Configuration.
14. Select Local Configuration → Environment.
15. Use the following settings:
 – For Gateway settings > Gateway URI:
http://servername:80/cognos/cgi-bin/cognos.cgi
 – For other URI settings > Dispatcher URI for external application:
http://servername:80/p2pd/servlet/dispatch
16. Click File → Save.

1 GPFS Release for Enterprise Manager from http://ibm.co/1RYHeG1