Temenos Deployment on IBM LinuxONE and IBM Public Cloud
This chapter provides an overview of a sample installation, deployment, tuning, and migration journey for Temenos on IBM LinuxONE.
The following topics are covered in this chapter:
The installation journey for the IBM LinuxONE hardware
Tuning
Migrating Temenos from x86 to IBM LinuxONE
Temenos Transact certified Cloud Native deployment for IBM LinuxONE
Temenos deployment options on IBM Hyper Protect public cloud
The Temenos Runbook architecture described in this chapter is based on Stack 2, IBM Java 1.8, IBM MQ, WebSphere, and Oracle DB 18c. See Figure 4-1 on page 86.
Figure 4-1 Stack 2 architecture used in this section. 1
The standard solution that is presented in Figure 4-2 allows you to build a strong foundation for the future. It gives the customer the ability to maintain the base infrastructure with minimal impact to production, and it provides a pathway to continuous availability through other IBM products, such as GDPS, and other storage mirroring solutions. Figure 4-2 shows the overall deployment architecture for a standard Temenos solution on IBM LinuxONE systems. The orange box represents the IBM LinuxONE III CPC.
Figure 4-2 Standard Temenos Solution on IBM LinuxONE Systems.
When deploying any production hardware or application, it is important to ensure that there is no single point of failure. Considering this, always plan to have at least two of each: hardware, Linux systems, application systems, network equipment and connections, storage infrastructure, and so on.
4.1 The installation journey for the IBM LinuxONE hardware
IBM engineers issue a code 20 after they have completed unpacking, assembling, connecting power, and the initial power-up and general diagnostic testing for a new IBM system. When the system is classified as code 20, the machine's warranty start date is set and the system is considered yours.
Working together, the customer and the IBM engineer create the I/O configuration and logical partition layout. The input/output definition file (IODF) configures the IBM LinuxONE server hardware and defines the logical partitions (LPARs) on the IBM LinuxONE. IBM engineers use the IODF to create the mapping of where each I/O cable is to be plugged into the IBM LinuxONE.
Each LPAR is set up with real memory layout and the number of IFLs assigned to the LPAR. Working together, you and the IBM Team set up the system using IBM LinuxONE best practices and IBM LinuxONE and Temenos recommendations.
The following sections give a visual perspective and a high-level overview of the installation journey.
4.1.1 Sandbox LPARs - Sandbox environment
Figure 4-3 on page 88 shows the Sandbox environment.
Figure 4-3 Sandbox environment.
We are now ready to install the Sandbox LPAR systems. The IBM hypervisor (z/VM for IBM LinuxONE) is the first operating system that is made operational. These Sandbox systems are used to provide training and to help verify that all network and hardware connections are working correctly. Each LPAR is set up as a member of a Single System Image (SSI) cluster, so it is configured the same as all the other IBM LinuxONE z/VM LPARs. This provides the foundation on which future z/VM maintenance and upgrades are installed. The Sandbox environment provides assurance that the hardware and operating system are not negatively impacted when maintenance or upgrades are later applied in the production environment. In addition, your first virtual Linux systems are installed in the Sandbox environment, as should all Linux patches, for the same verification reason.
4.1.2 Development and Test environment
Figure 4-4 shows the Development and Test environment.
Figure 4-4 Development and Test environment.
The Development and Test environment defines the first set of LPARs used to develop Temenos and IBM LinuxONE systems on this platform. It consists of four LPARs within a single SSI cluster, with each LPAR running the z/VM hypervisor.
Two LPARs run the Temenos application software, web services, and any other non-database software. The virtual Linux guests running on these two LPARs have many versions or levels of applications and Linux operating systems installed on them. Development and initial testing should occur only on these virtual Linux guests.
The other two LPARs run both core and non-core banking databases. One of the benefits of running these databases only on these two LPARs is a reduction in database licensing costs. The segregation of development or test databases to their own two LPARs ensures that application development processes (running on the other Development or Test LPARs) can proceed unaffected by database workloads.
4.1.3 Pre-Production environment
Figure 4-5 on page 90 shows the Pre-Production environment.
Figure 4-5 Pre-Production environment.
Within the systems environment hierarchy, the Pre-Production environment is second only to the Production environment. Pre-Production systems provide a last-chance verification of how any changes might affect production. This is a set of systems that mimics the real production environment, and it is here that errors or performance issues caused by changes or updates can be caught. It is important that this environment is set up to replicate production as closely as possible.
In Figure 4-5, the Pre-Production environment has the following configuration:
Two LPARs running the core banking databases
Another two LPARs running non-core banking databases
Another four application LPARs that are within a single SSI cluster
This configuration matches the Production environment setup also shown in Figure 4-5.
4.1.4 Production LPARs environment
Figure 4-6 on page 91 shows the Production environment.
Figure 4-6 Production LPARs environment.
The work that has been done setting up the Sandbox, Development and Test, and Pre-Production systems provides the foundation to create a Production environment that can take full advantage of the IBM LinuxONE platform.
Clustering the virtual banking application Linux guests across four LPARs allows room for each LPAR to grow when workload increases. The database servers are split between core banking and non-core banking databases. This split of the databases provides savings in software licensing costs.
4.1.5 Disaster recovery
Figure 4-7 on page 92 shows a high-level storage disaster recovery layout.
Figure 4-7 High level storage and disaster recovery layout.
IBM LinuxONE has a unique disaster recovery capability. Every IBM LinuxONE is engineered to strict standards, which ensures that no IBM LinuxONE differs architecturally from another, no matter the version. Because the machines are architecturally identical, any virtual Linux guest from any LPAR or IBM LinuxONE can run on any other LPAR or IBM LinuxONE, as long as it has access to the same network and storage (or a copy of the storage). No changes are needed for any virtual or native Linux guest to run on another IBM LinuxONE CPC. This portability does not exist on any other hardware platform.
Instantaneous data storage mirroring between the production and DR sites ensures that any change, modification, or update applied to Linux guests in production is automatically replicated at the DR site.
Capacity Backup (CBU) processors are another unique and cost-advantageous feature of the IBM LinuxONE offering. CBUs are processors that are available only on the DR system and are priced lower than production processors. These DR CBU CPCs are based on the permanent production configuration, and they are not active while your production CPC is operational. As such, IBM software licensing and requisite fees apply only to those processors that are active (based on the CPC permanent configuration).
There can be additional fees for non-IBM software. In addition, some non-IBM software packages can require new license keys to take advantage of the additional capacity. Check with your software vendor for details.
Figure 4-8 shows how disaster recovery (DR) matches the production environment.
Figure 4-8 DR matching a production environment.
Disaster recovery (DR) CPCs match the production environments. This includes the number of processors, memory, network, and I/O configuration. You can design the DR site to handle only the production workload or you can build the DR site to handle both production and non-production workloads.
Figure 4-9 on page 93 shows the Sandbox flexibility in the DR environment.
Figure 4-9 Flexible Sandbox capabilities in the DR environment.
The CPCs for each DR site have a small active LPAR (Sandbox). This LPAR is available to the support teams for test purposes to verify whether the network, mirrored storage, and the CPCs are ready to handle a disaster recovery.
Figure 4-10 on page 94 shows the disaster recovery process when it is engaged.
Figure 4-10 Disaster recovery process.
With this type of disaster recovery setup, a runbook can be created that documents the steps to transfer Production to the DR site. This runbook allows anyone in the Support Team structure to execute the process. It can be as simple as the following steps (a small verification sketch follows the list):
1. Verify that all Production LPARs are down
2. Activate DR Production LPARs
3. Bring up DR Production systems (IPL systems)
4. Verify DR production virtual servers are active and ready to accept workloads
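The final step lends itself to simple automation. The following is a minimal, hedged sketch of a reachability check for the DR guests; the host names are illustrative placeholders for your own DR inventory, and a real runbook would also check application and database health:

# Verify that the DR production guests are reachable (step 4)
for host in drdb01 drdb02 drapp01 drapp02; do
    if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
        echo "$host: reachable"
    else
        echo "$host: NOT reachable - investigate before accepting workload"
    fi
done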
 
IMPORTANT: For DR site planning and setup purposes, all non-IBM equipment and workloads running in Production should also be replicated on the Disaster Recovery site. This ensures a complete and seamless recovery process.
Figure 4-11 shows the disaster recovery process with IBM GDPS Virtual Appliance.
Figure 4-11 IBM GDPS Virtual Appliance.
The IBM GDPS Virtual Appliance is designed to facilitate near-continuous availability and disaster recovery by extending GDPS capabilities to IBM LinuxONE. It substantially reduces recovery time and the complexity associated with manual disaster recovery.
The Virtual Appliance requires its own LPAR with a dedicated special-purpose processor.
4.2 Tuning
This section contains the Linux and Java tuning considerations to optimize Temenos Transact and its dependent software on IBM LinuxONE.
4.2.1 Linux on IBM LinuxONE
This section describes IBM LinuxONE specifics for the Linux operating system.
Huge pages
Defining large frames allows the operating system to work with memory frames of 1 MB rather than the default 4 KB. This allows smaller page tables and more efficient Dynamic Address Translation, and enabling fixed large frames can save CPU cycles when looking for data in memory. Disable transparent huge pages (enabled by default) to ensure that the 1 MB pool is assigned as specified. Transparent huge pages are assigned opportunistically while enough contiguous memory is available; the longer the system runs, the more memory fragments and the less effective they become. Large page support requires support for the Linux hugetlbfs file system. To check whether 1 MB large pages are supported in your environment, issue the following command:
grep edat /proc/cpuinfo
features : esan3 zarch stfle msa ldisp eimm dfp edat etf3eh highgprs te
An output line that lists edat as a feature indicates 1 MB large page support.
Defining huge pages with the kernel parameters shown below allocates the memory as part of a pool. The two components that can benefit from the large frames are Java and Oracle. To monitor the pool usage, check the output of cat /proc/meminfo.
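For example, the pool counters can be filtered as follows (abridged output; the values shown are illustrative). On a system set up for 1 MB pages, Hugepagesize reports 1024 kB:

grep Huge /proc/meminfo
HugePages_Total:   49152
HugePages_Free:    12288
Hugepagesize:       1024 kB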
See the USE_LARGE_PAGES initialization parameter in the Oracle documentation to activate huge pages in the database.
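As a minimal sketch (verify the value against the Oracle documentation for your release), setting the parameter to ONLY in the instance parameter file requires the SGA to be backed by huge pages and prevents the instance from starting without them:

USE_LARGE_PAGES=ONLY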
The following kernel parameters in /etc/zipl.conf enable 1 MB large frames:
transparent_hugepage=never default_hugepagesz=1M hugepagesz=1M hugepages=<number of pages allocated at boot>
Calculate the number of pages according to the application requirements. The number can be about three-fourths (3/4) of the memory assigned to the instance.
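For example (illustrative numbers), a Linux guest with 64 GB of memory that dedicates about three-fourths of it to the database instance would reserve roughly 48 GB, which is 49152 pages of 1 MB each:

transparent_hugepage=never default_hugepagesz=1M hugepagesz=1M hugepages=49152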
4.2.2 Java virtual machine tuning
The IBM Java 1.8 package is the certified Java distribution for Temenos on IBM LinuxONE. The IBM Java 1.8 JDK also provides a JIT compiler, which showed a positive performance impact in our lab environment.
JVMs or Logical IFLs
In our lab, a 1:1 allocation of JVMs to physical IFLs proved to be the sweet spot when tuning the Transact application for maximum throughput. Consider the transaction mix for your environment when making this decision.
Shared class cache
Enable the use of a shared class cache between JVMs for AOT and JIT information.
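A minimal sketch of the relevant IBM JVM options follows; the cache name, directory, and size are illustrative and should be adjusted to your environment (app.jar is a placeholder):

java -Xshareclasses:name=transact,cacheDir=/var/javasharedresources -Xscmx512m -jar app.jar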
Heap size
Set the minimum (option -Xms) and the maximum (option -Xmx) Java heap size to the same value. This ensures that the heap size does not change during run time. Make the heap size large enough to accommodate the requirements of your applications, but small enough not to impact performance: with a heap that is too large, you can run out of memory or increase the time that it takes the system to clean up unreferenced objects in the heap. This cleanup process is known as Garbage Collection.
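For example (the 8 GB size is illustrative, not a recommendation), a fixed-size heap is set as follows:

java -Xms8g -Xmx8g -jar app.jar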
IBM LinuxONE III Integrated Accelerator for zEDC
The Integrated Accelerator for zEDC is exploited transparently through an integrated on-chip accelerator, with no setup required. The prerequisite for using it is IBM Java 8 SR6.
Pause-less garbage collection
Pause-less Garbage Collection (GC) is a new GC mode in the 64-bit IBM SDK for Java 8 SR5. Its purpose is to reduce the impact of GC stop-the-world phases and improve the throughput and consistency of response times for Java applications. This technology leverages the new Guarded Storage Facility in IBM LinuxONE hardware to allow additional parallel execution of GC-related processing with application code. Pause-less Garbage Collection is particularly relevant for applications with strict response time Service Level Agreements (SLAs) or large Java heaps.
As seen in Figure 2-6 on page 25, the time that the program threads need to stop during garbage collection is massively reduced with the use of the Guarded Storage Facility.
The Pause-less GC mode is not enabled by default. To enable the new Pause-less GC mode in your application, introduce -Xgc:concurrentScavenge to the JVM options.
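Assuming IBM SDK for Java 8 SR5 or later on guarded-storage-capable hardware, an illustrative invocation that combines the option with a fixed-size heap looks like this:

java -Xgc:concurrentScavenge -Xms8g -Xmx8g -jar app.jar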
Large pages
JVM "-Xlp" startup-option is set in the WebLogic Application server for the Temenos Transact application. This setting indicates that the JVM should use Large pages for the heap. If you use the SysV shared memory interface, which includes java -Xlp, you must adjust the shared memory allocation limits to match the workload requirements.
Garbage Collection Policy gencon
As of Java 7, gencon is the default GC policy. This policy introduces a nursery area where short-lived objects are placed. The default nursery area is relatively small; if your system needs to carry more objects, make that area larger.
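In the IBM JVM, the nursery size can be set with the -Xmn option. A minimal sketch (the 2 GB nursery inside an 8 GB heap is illustrative only):

java -Xmn2g -Xms8g -Xmx8g -jar app.jar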
Red Hat and Security Patches
Red Hat has made updated kernels available to address a number of security vulnerabilities. These patches are enabled by default because Red Hat prioritizes out-of-the-box security. The updates (both kernel and microcode) change speculative execution, a performance optimization technique, and can therefore result in workload-specific performance degradation.
Customers who feel confident that their systems are well protected might want to disable some or all of the protection mechanisms. For more information about controlling the impact of microcode and security patches, read the following Red Hat article, which describes the vulnerabilities patched by Red Hat and how to disable some or all of these mitigations:
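As a hedged illustration (verify against the Red Hat guidance for your kernel version before using it), recent kernels accept a single boot parameter that turns off the optional CPU vulnerability mitigations:

mitigations=off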
4.3 Migrating Temenos from x86 to IBM LinuxONE
This section describes how we addressed some limitations of a Temenos application stack deployed on x86 by moving the architecture to IBM LinuxONE and using its advantages.
The starting point
Our hypothetical initial installation used several x86 servers to implement the database and application tiers of the solution. Key aspects of this installation included the following:
Large number of physical servers to be maintained
Gaps in availability coverage due to poor manageability of virtual instances
Physical connectivity (SAN, network) requirements
Large number of virtual instances to operate and maintain
Step 1: IBM LinuxONE hardware
Having a large number of x86 servers is administratively burdensome and takes a large amount of data center resources (such as floor space, power consumption, networking and storage ports, cooling effort, and so on). In addition, software licensing is often based on server physical cores so the amount of x86 capacity required for a given workload can have significant licensing impact.
IBM LinuxONE addresses this by consolidating the many x86 servers onto two IBM LinuxONE servers. This provides a reduction in the physical server count and in the connectivity requirements.
As discussed previously, it might be possible to use a single IBM LinuxONE server. However, using two IBM LinuxONE servers provides greater flexibility in managing situations that require a server to be removed from service temporarily.
Step 2: Hypervisor
We use z/VM as the hypervisor, with the SSI function. This improves the manageability of virtual instances by eliminating the need to synchronize configuration details between shadow virtual instances. It also offers easier options for local recovery of virtual instances (restart on the same IBM LinuxONE server or on the other one) when a restart is needed.
Running the members of the SSI cluster across the two IBM LinuxONE servers provides the maximum flexibility.
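As a brief sketch of what this looks like in operation (the guest and member names are illustrative, and the commands must be issued from a suitably privileged z/VM user ID), the CP command QUERY SSI displays the cluster members, and live guest relocation moves a running Linux guest between members:

QUERY SSI
VMRELOCATE MOVE USER LINUX01 TO MEMBER2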
Step 3: Linux virtual instances
Rather than simply re-creating each virtual instance from the x86 environment, we use the superior vertical scalability of the IBM LinuxONE server and the z/VM hypervisor. This reduces the total number of virtual instances.
z/VM also allows a high degree of horizontal scalability by supporting large numbers of virtual instances per system. This provides the option of adjusting the number of instances to make sure that there are enough to prevent a noticeable impact to operation, for example, if a virtual instance must be removed from the environment for maintenance or in the event of a failure.
Step 4: Java
Migrating Java applications from one platform to another is easy compared to the migration effort required for C or C++ applications. Even though Java applications are operating system independent, the following implementation and distribution specifics need to be considered:
Most of the Java distributions have their own Java virtual machine (JVM) implementations, and there are differences in the JVM switches. These switches are used to make the JVM and the Java application run as optimally as possible on that platform. Each JVM switch that is used in the source Java environment needs to be verified against a similar switch in the target Java environment (a quick check is sketched after this list).
Even though Java SE Development Kits (JDKs) are expected to conform to common Java specifications, each distribution has slight differences. These differences are in the helper classes that provide functions to implement specific Java application programming interfaces (APIs). If the application is written to conform to a particular Java distribution, the helper classes referenced in the application must be changed to refer to the new Java distribution's classes.
Special procedures must be followed to obtain the best application migration. One critical point is to update the JVM to the current stable version. Compatibility with earlier versions is generally strong, and the performance improvements of newer versions benefit applications.
Ensure that the just-in-time (JIT) compiler is enabled.
Set the minimal heap size (-Xms) equal to the maximal heap size (-Xmx). The heap size should always be less than the total memory configured for the server.
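A simple hedged check of the target JVM follows; java -version confirms the distribution and service release, and java -X lists the nonstandard options that the target JVM supports, which can be compared against the switches used on the source platform:

java -version
java -X 2>&1 | grep -i shareclasses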
Step 5: IBM WebSphere Application Server
IBM has ported many of its software products to IBM LinuxONE. The benefit to customers is that a migration from one platform to another is, in many cases, effortless, because many of these products share their code base across multiple platforms. This is particularly the case for IBM WebSphere Application Server, which from Version 6 has had the same code base on Intel x86 and IBM LinuxONE, simplifying migration considerably. You can use the deployment manager and node agents to deploy IBM WebSphere Application Server on the new IBM LinuxONE LPARs or Linux guests under z/VM. Generally, migrating from IBM products on distributed servers to the same IBM products on IBM LinuxONE is a relatively straightforward process.
For detailed guidance on migrating IBM WebSphere Application Server, see the following link:
Step 6: Oracle database
In our recommended architecture, the Oracle database is deployed with the Real Application Clusters (RAC) feature. This provides a highly available database tier to the Temenos application servers.
Deploying Oracle database in a z/VM SSI environment gives some choices for how the system can be configured. Oracle RAC One Node is a configuration of Oracle specifically designed to work with virtualized environments like z/VM. It can offer most of the availability benefits of full RAC without most of the cluster overhead. It does this by sharing some of the availability responsibility with the hypervisor. For example, being able to relocate a database guest from one z/VM system to another might be enough to provide database service levels high enough for your installation.
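As an example of this shared responsibility (a hedged sketch; the placeholders stay placeholders and the exact syntax should be confirmed against your Oracle release), a planned online relocation of a RAC One Node database to another node uses srvctl:

srvctl relocate database -d <db_unique_name> -n <target_node>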
Step 7: TAFC to TAFJ Migration
A small but significant number of Temenos clients continue to run Transact on the C-based application framework (TAFC). However, Transact versions greater than R18 are now deployed exclusively on the Java-based application framework (TAFJ). Organizations on TAFC need to migrate to TAFJ to run Temenos software on IBM LinuxONE. Clients planning to upgrade from Temenos releases R14 and older require a two-step upgrade approach: first move to an intermediate release that supports both TAFC and TAFJ, and then upgrade to a current TAFJ release.
A typical migration consists of running the TAFC and TAFJ environments side by side during the migration process. A phased approach is then used to upgrade the multiple parts of the core banking solution with the least amount of impact on the core banking operations. An important consideration is to ensure that all customizations and applications support the JDBC driver for connectivity to the database, because this is the only driver supported by Temenos on IBM LinuxONE. See Figure 4-12 and the example URL that follows it.
Figure 4-12 Migrating TAFC to TAFJ. 2
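For reference, connectivity through the Oracle JDBC thin driver is configured with a URL of the following general form (the host, port, and service name are illustrative placeholders, and the exact property that carries the URL depends on your TAFJ configuration):

jdbc:oracle:thin:@//dbhost.example.com:1521/T24SVC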
Main points of a typical Temenos TAFC to TAFJ Migration
The following steps summarize the main considerations when migrating from TAFC to TAFJ:
1. Install the desired Transact TAFJ version onto the IBM LinuxONE LPAR or Guest
2. Migrate Applications and DB from x86 to IBM LinuxONE
3. Port applicable C applications to run on IBM LinuxONE. If necessary, update applications to use JDBC. Run the Oracle DBUpdate conversion tool to migrate the existing database schema and data to a new target version or release of the Oracle DB
4. Deploy the new TAFJ compatible version of Transact on to a new IBM WebSphere Application Server running on IBM LinuxONE
5. Upgrade any specific customized modules to support the current Transact installation on IBM LinuxONE
6. Run the DBUpdate conversion process to update the database schema and data to support the current Transact installation on IBM LinuxONE
4.4 Temenos Transact certified Cloud Native deployment for IBM LinuxONE
IBM, Red Hat, and Temenos have designed the first on-premises cloud native stack (Stack 11). This stack delivers a stepping stone to cloud for clients, allowing the delivery of an on-premises private cloud based on IBM LinuxONE with Red Hat OpenShift and IBM Cloud Paks. Figure 4-13 on page 101 shows Stack 11 for Temenos Transact cloud.
Figure 4-13 Stack 11 for Temenos Transact cloud. 3
Figure 4-14 on page 102 shows one option for a cloud architecture.
Figure 4-14 Cloud architecture.
This offering provides the highest levels of security and secure data residency for your core and delivers the benefits that are inherent with cloud native architecture.
This offering also enables the concept of hybrid cloud: simultaneously maintaining the core data on-premises in a cloud native fashion while using IBM Cloud (or other cloud providers) in a consistent and governed manner. This is achieved through the combination of Red Hat OpenShift and IBM Cloud Paks.
A possible use case is Temenos Transact deployed on IBM LinuxONE on-premises cloud native and Temenos Infinity on IBM Hyper Protect public cloud.
4.5 Temenos deployment options on IBM Hyper Protect public cloud
Temenos and IBM have tested Transact on the IBM Hyper Protect DBaaS platform running PostgreSQL. PostgreSQL will be ready and certified as a backend database by May 2020.
For more information about this platform, see the following link:
To discuss public cloud options, contact the following person:
John Smith
WW Offering Manager for Temenos | Linux Software Ecosystem Team
WW Offering Management, Ecosystem & Strategy for IBM LinuxONE
 

1 Courtesy of Temenos Headquarters SA
2 Courtesy of Temenos Headquarters SA
3 Courtesy of Temenos Headquarters SA