The service provider
The service provider is the IT department in an organization. Because of the complexity of the role of service provider, this role is not likely to be performed by a single person but by a combination of people with shared understanding.
It is the responsibility of the service provider to work with the service consumers to understand their issues and requirements. After the providers understand the needs of the customers (the service consumers), they can use their own knowledge and experience from working with the resources within the organization to create a service that provides value for the service consumers.
Creating these services also requires the service provider to use coding skills to develop the service and the service infrastructure. The service provider also must understand (based on the needs of the service consumer) the capacity requirements to support a service.
This chapter describes the reasons why Walmart decided to use IBM CICS to create a caching service, the characteristics of a microservices architecture, and how the service is a cloud offering based on the five essential characteristics of cloud, as outlined by the US National Institute of Standards and Technology (NIST).
This chapter includes the following topics:
Why Walmart chose CICS for the caching service
Foundational underpinnings
Five essential characteristics of cloud
Microservices architecture
Summary
3.1 Why Walmart chose CICS for the caching service
Based on the requirements of the development team for a distributed caching solution, the platform engineering team began assessing several off-the-shelf solutions in software and appliance form.
They also assessed the potential of building their own caching solution. Ultimately, the platform engineers built a solution that was stable, cost-effective, and easy to integrate with the application components that were being developed by the development team. This solution was realized by using the resources and unique capabilities of the IBM z/OS and IBM CICS Transaction Server platform.
3.1.1 Mixed language support
CICS provides a highly versatile environment for hosting applications and services. The range of language support gives developers or service providers many options, but there are workload considerations that should guide the selection.
Selecting HLASM and COBOL as service development languages
The skill set of the Walmart software engineers who originally started creating the CICS-based cloud services had some bearing on the selection of the programming languages that were used to create the services. Using the skills and languages that the engineers were already familiar with helped them get started more quickly.
There were other measurable factors that supported selecting these languages. The primary example is execution efficiency. The use of lower-level languages enables more control of the code path and resultant resource usage. Making deep execution adjustments that eliminate microseconds from a process might appear unnecessary (and can be in some use cases). However, the cumulative effect of this resource saving can become meaningful when operating at a substantial scale. At several thousand transactions per second, the difference can result in the need for an entire extra engine. At tens of millions of transactions over the course of a day, it can mean more hours of run time.
The good news is that CICS provides the flexibility to make these choices. With its support for various programming languages, the appropriate tool for virtually any job is available. Service providers can select High Level Assembler (HLASM) or C where low-level particulars or efficiency is important. Higher-level languages, such as COBOL or Java, or even scripting with REXX or PHP, can be selected where quick delivery of functions is needed.
For more information about the languages that are supported by CICS, see the CICS Transaction Server for z/OS documentation in IBM Knowledge Center.
3.1.2 Everything in the box
CICS on z/OS provides a complete platform that includes all components that might be needed for hosting and managing services, along with consumable entry points into the layers of the overall stack.
Native CICS APIs
To satisfy the requirement for HTTP-based communications with the caching service (and most cloud services), the following relevant APIs were used:
EXEC CICS WEB EXTRACT: Gets HTTP information about the incoming request.
EXEC CICS WEB RECEIVE: Gets request information, such as the HTTP body, from the consumer.
EXEC CICS WEB SEND: Sends the response to the consumer.
The following optional APIs proved to be useful with HTTP-based communications:
EXEC CICS WEB READ HTTPHEADER: Gets HTTP header fields and information.
EXEC CICS WEB WRITE HTTPHEADER: Creates HTTP header fields and information.
The range of native APIs that are available in CICS is extensive and enables greater function and flexibility for programs that are hosted in that environment. API commands that facilitate programming functions and direct integration with hosting environment capabilities deliver a great amount of value.
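The following COBOL fragment is a minimal sketch of how the WEB commands that are listed above might be combined in a service program that receives a JSON request and returns a JSON response. The data names, lengths, and option subset shown are illustrative assumptions, not taken from the Walmart service.

       WORKING-STORAGE SECTION.
      *    Illustrative data names; not the actual Walmart definitions
       01  WS-METHOD            PIC X(8).
       01  WS-METHOD-LEN        PIC S9(8) COMP VALUE 8.
       01  WS-BODY              PIC X(4096).
       01  WS-BODY-LEN          PIC S9(8) COMP VALUE 4096.
       01  WS-RESPONSE          PIC X(4096).
       01  WS-RESPONSE-LEN      PIC S9(8) COMP.
       01  WS-MEDIATYPE         PIC X(56)  VALUE 'application/json'.

       PROCEDURE DIVISION.
      *    Identify the HTTP method of the incoming request
           EXEC CICS WEB EXTRACT
                HTTPMETHOD(WS-METHOD)
                METHODLENGTH(WS-METHOD-LEN)
           END-EXEC

      *    Receive the request body that was sent by the consumer
           EXEC CICS WEB RECEIVE
                INTO(WS-BODY)
                LENGTH(WS-BODY-LEN)
                MAXLENGTH(4096)
           END-EXEC

      *    ... service logic builds WS-RESPONSE and WS-RESPONSE-LEN ...

      *    Send the JSON response back to the consumer
           EXEC CICS WEB SEND
                FROM(WS-RESPONSE)
                FROMLENGTH(WS-RESPONSE-LEN)
                MEDIATYPE(WS-MEDIATYPE)
                STATUSCODE(200)
           END-EXEC

           EXEC CICS RETURN END-EXEC.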
z/OS APIs
The system-level APIs that are available to CICS and its hosted services further increase the value proposition. The available APIs extend the reach of functions down into the operating system and other subsystems to provide a fully realized, bottom-to-top, integrated environment that service developers can traverse.
The following APIs allow CICS to interface with system-level components (a brief usage sketch follows the list):
EXEC CICS VERIFY: Checks with an external security manager to authenticate users.
EXEC CICS ENQ/DEQ: Manages serialization of various system resources.
EXEC CICS ASKTIME: Requests information from the sysplex timer.
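As a brief sketch of the ENQ/DEQ and ASKTIME commands (the resource name and data names are hypothetical), a service program might serialize an update to a shared resource and capture a timestamp as follows:

      *    Hypothetical resource and data names for illustration only
       01  WS-RESOURCE          PIC X(16) VALUE 'CACHE.SEGMENT.01'.
       01  WS-ABSTIME           PIC S9(15) COMP-3.

      *    Serialize access to the shared resource
           EXEC CICS ENQ
                RESOURCE(WS-RESOURCE)
                LENGTH(16)
           END-EXEC

      *    ... update the shared resource ...

      *    Obtain the current time as an absolute time value
           EXEC CICS ASKTIME
                ABSTIME(WS-ABSTIME)
           END-EXEC

      *    Release the serialized resource
           EXEC CICS DEQ
                RESOURCE(WS-RESOURCE)
                LENGTH(16)
           END-EXEC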
Software-defined environment
When this extensive range of application, host, and system level APIs is considered (along with the virtualization of nearly every resource in the environment and built-in workload management capabilities), the result is effectively an entire software-defined environment that is available to programmers.
From application logic and CICS APIs, through dynamic hosting environment configuration with resource definition online (RDO) and system programming interface (SPI), down to manipulating low-level system resources with supervisor calls (SVC), anything is possible. These factors make z/OS and CICS a wanted platform for services.
3.1.3 Monitoring and diagnostics
Shaped by a long history of running the most critical workloads across numerous industries, and by the need to use precious assets efficiently, CICS and z/OS evolved to provide a high degree of visibility into even the most granular levels of system resource usage. This visibility can be achieved through inherent components of z/OS, such as IBM Resource Measurement Facility™ (IBM RMF™) and System Management Facilities (SMF), or with any number of third-party tools. Access to this information enables operational stability and supports the cloud computing characteristic of measured service as defined by NIST.
Examples of features and characteristics that are relevant to monitoring and diagnostics are provided in the following sections.
Isolated resources
Isolating resources is a key component of the z/OS architecture. It gives individual processes a degree of assurance that their resources are available and can be tracked, even in what is typically a highly multi-tenant environment.
Isolation can be applied at all levels of the platform stack. At the lower level, IBM Processor Resource/Systems Manager™ (IBM PR/SM™) can be used to contain hardware resources at logical partition (LPAR) boundaries. At higher levels of the stack, workloads might be isolated to particular regions and controlled at the address space level. At a more granular level, the resource consumption of individual transactions or service instances can be managed with constructs, such as transaction classes.
I/O visibility
One of the strengths of the z Systems platform is its I/O capabilities. The offload of I/O processing to dedicated processors contributes to this reputation, as do the system and software components for tracking and managing I/O. Identifying data set level I/O rates, the amount of time waiting on a channel, the rate of cache hits on an array, or any number of other conditions is critical to maintaining a healthy I/O environment and satisfying throughput expectations.
Visibility at TCB level
Visibility into low-level processing characteristics is another attribute that is beneficial to service providers. Determining how long a task runs on a task control block (TCB) or how many times a task switches between TCBs is critical to performance and resource management in the environment.
3.1.4 Vendor environment
Although a substantial amount of visibility is delivered by default with the platform, a broad and mature ecosystem of vendor tools is also available for CICS and z/OS environments. Tools are available to complement almost any aspect of the platform. Some of these tools are described in this section.
Monitoring packages
Some third-party monitoring packages build on the innate monitoring capabilities of the platform and supercharge them, which enables even more granularity and correlated information.
Business continuance
Concerns about processing continuance during or after disruptions are always important, even on platforms with 99.999% availability (often called five-nines platforms). Meeting the service requirements of the consumer, even during any type of disruption, is an essential consideration, and many tools are available to help with this goal.
Automation packages
Along with monitoring, automation is relevant in a consumer-oriented delivery model, and many vendors contribute with automation packages. Automation can include the following aspects of delivering a service:
Provisioning
Diagnostics
Governance
Scaling
Decommissioning
3.2 Foundational underpinnings
There are some basic underlying principles to establish before delving into the characteristics that are essential for a cloud service. These foundational components greatly facilitate the realization of several, if not all, of the requirements for a cloud service.
3.2.1 Naming conventions
Well-thought-out and well-designed naming conventions are useful and help enable the essential cloud characteristics of a service design, as described in 3.3, “Five essential characteristics of cloud” on page 17. These naming conventions create logical relationships between the various resources that comprise a service. For more information about their importance and relevance to the parts of the delivery model, see 3.3.5, “Measured service” on page 23, Chapter 4, “The CICS systems programmer” on page 27, and Chapter 6, “Operational considerations” on page 59.
3.2.2 Standards
The US NIST definition of cloud computing includes the concept of a type of deployment model that is referred to as hybrid cloud, which is described as a composite of two or more distinct cloud infrastructures. Each distinct cloud infrastructure remains and retains its unique attributes, but connectivity or communication across the members of the hybrid cloud is made possible. The portability of data and applications is enabled.
 
This deployment model is the most applicable to most enterprises. Interoperability is key in a hybrid cloud deployment model, and adherence to open standards is critical to maintaining compatibility with a diverse set of services. Standards are particularly relevant to, but not limited to, the broad network access essential characteristic that is described in 3.3.2, “Broad network access” on page 20.
3.3 Five essential characteristics of cloud
Various demands from IT services consumers led to the evolution of how IT capabilities are provided. These new delivery methods coalesced into what is now called cloud computing. Although the term was initially surrounded by a significant amount of ambiguity, a de facto standard definition of the model was eventually established.
NIST went through numerous iterations in establishing its final definition of cloud computing, which is outlined in NIST publication SP800-145 and became the accepted standard. Aside from the service and deployment models that are outlined in the publication, the following essential characteristics were established as defining attributes for a cloud service:
On-demand self-service
Broad network access
Resource pooling
Rapid elasticity
Measured service
These characteristics are used as the guiding principles for the design and development of a cloud service. For a service provider, these requirements are central to delivering capabilities going forward.
3.3.1 On-demand self-service
This characteristic specifies that the consumer can autonomously acquire services with no interaction with the service provider. The access point to the service request might be available through some sort of user interface or programmatically; for example, by using an application programming interface (API).
Although not explicitly called out, provisioning of the service is expected to be automated. Even in the absence of consumer-to-provider interaction, any manual provider actions on the back end should be avoided.
Automation
Providing on-demand self-service often involves a substantial time investment, as automation must be built. Service provider participants with programming knowledge might be needed to contribute to the automation.
A service ultimately is a composition of parts. Examples of parts include data sets, programs, parameter or configuration files, transactions, and definitions. A thorough inventory and review is necessary to identify all of the components that are required for the service instance.
After all components and steps are identified, creating the service can be performed programmatically. The service can be created in various ways by using the tools, languages, and utilities that are most appropriate for the task at hand. This process can include a series of JCL steps, a REXX script, COBOL programs, or some combination of these options and other options. These selections depend on specific use cases and requirements. Details, such as the type of runtime environment that is used, the particular actions to perform, and the type of resources to define determine the appropriate approach and tools to use.
Upon successful automation of the service creation, the capability must be made available to consumers. Generally, use standards-based web services as the channel for invocation of the service provisioning to ensure accessibility to the broadest range of clients.
Resources
Consider the following points regarding automating resources:
Use the CICS resource definition online (RDO) facility for dynamic resource definition and modification in support of service capabilities.
Well-thought-out naming conventions and the ability to use patterns facilitate the automation efforts and support some of the other essential characteristics of cloud. It is important to fully understand the naming restrictions of all resource types before deciding on a naming convention.
Use the CICS system programming interface (SPI) commands to automate and modify other system components, as in the sketch that follows this list.
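For example, a provisioning program might use the SPI to install a URIMAP for a new service instance dynamically. The following is a minimal sketch; the URIMAP name, path, TCPIPSERVICE, and program names are hypothetical, and a complete program would also check the response codes.

      *    Hypothetical names for illustration only
       01  WS-ATTRIBUTES        PIC X(120).
       01  WS-ATTR-LEN          PIC S9(8) COMP.
       01  WS-PTR               PIC S9(8) COMP.

      *    Build the attribute string for the new URIMAP
           MOVE 1 TO WS-PTR
           STRING 'USAGE(SERVER) SCHEME(HTTP) '
                  'PATH(/salesinfo/*) '
                  'TCPIPSERVICE(SRV55123) '
                  'PROGRAM(SVCPROG1)'
                  DELIMITED BY SIZE
                  INTO WS-ATTRIBUTES
                  WITH POINTER WS-PTR
           END-STRING
           COMPUTE WS-ATTR-LEN = WS-PTR - 1

      *    Dynamically define and install the URIMAP resource
           EXEC CICS CREATE
                URIMAP('SALESURI')
                ATTRIBUTES(WS-ATTRIBUTES)
                ATTRLEN(WS-ATTR-LEN)
           END-EXEC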
For more information about relevant resource configurations and modifications, see Chapter 4, “The CICS systems programmer” on page 27.
Consumer access
The consumer needs some form of entry point or access to the provisioning automation. Although some service requests are sent programmatically, most requests come directly from consumers who interact with some form of online menu or catalog of service offerings.
How the customer entry point fits into the on-demand self-service infrastructure is shown in Figure 3-1. This entry point can be a service catalog system, orchestrator, process or workflow system, or home-grown front-end service portal. Similar to cloud services, the use of open standards and a service-oriented approach ensures broad accessibility, manageability, and extensibility of the provisioning framework.
Figure 3-1 On-demand self-service infrastructure
Although this architecture allows provisioning requests to come from any source that can issue the appropriate web service request, the primary entry point for the consumers in Walmart is a self-service portal that was created by the engineers. This self-service web application is hosted entirely in CICS. It consists of an HTML5/CSS3/JavaScript front end that is stored on a z/OS file system (zFS) and is referenced through URIMAP definitions. The JavaScript code validates form input, formats it as JSON, and sends the request to the back-end web services that start the provisioning automation. Figure 3-2 on page 20 shows the design of Walmart’s self-service web application.
 
Figure 3-2 Self-service web application design
3.3.2 Broad network access
The essential cloud characteristic of broad network access (ubiquitous access) dictates that capabilities are accessible through standard interfaces over a network. In essence, broad network access is delivering capabilities through web services.
Standards
The use of standards is key to enabling the broad network access cloud characteristic. Use of standards for transport (as in TCP/IP and HTTP) and accessibility (such as URI and JSON) ensures that service capabilities are available to the broadest range of platforms and clients. Another benefit that is provided by this model is the resulting abstraction and loose coupling of the clients from the specific systems of service providers.
ReSTful services
Although standards that are associated with SOAP web services were once a primary option, Representational State Transfer (ReST) has become the de facto software architectural style for cloud services today. ReST involves a much lighter weight interface with less supporting infrastructure. Unlike SOAP, ReST includes the following characteristics:
Runs over only HTTP and does not support context or state management.
The rules for ReST are much less stringent. SOAP requires XML. ReST often uses JSON, but does not require it.
Access to services via ReST uses the standard Uniform Resource Identifier (URI) and Uniform Resource Locator (URL) formats across Internet Protocol networks that use HTTP. A URL is a specific type of URI. A URL typically locates a resource on the Internet or within your network, and is used when a web client makes a request to a server for a resource.
A URL for HTTP or HTTPS is made up of up to five components (scheme, host, port, path, and query string). These components are combined and delimited as shown in the following example:
scheme://host:[port]/path[?querystring]
This example includes the following components:
Scheme
The scheme identifies the protocol to be used to access the resource over TCP/IP. It can be HTTP (without SSL or TLS) or HTTPS (with SSL or TLS) followed by a colon and two forward slashes.
Host
This component of the URL can be a host name or an IP address and it identifies the host (system) where services are located. A host system provides the environment where services run and access requested resources. Services on this host might direct work to other systems (systems that host other CICS servers and associated resources) that support the operations of the service.
Port
A specific port number can also be specified in addition to the host component. If a port number is specified, this number follows the host name, separated by a colon. When omitted, port 80 is the default for HTTP and 443 is the default for HTTPS.
In the Walmart z/OS cloud environment, the port is used to provide multitenancy, with all service instances directed at a specific TCP/IP service port number. This approach enables the host and port in the URL to direct work to the specific CICS servers where the service and subsequent resources are defined.
Path
The path identifies the specific resource on the host that the HTTP client wants to access. The path is defined by using a CICS URIMAP resource. When defining the path, you should place an asterisk (*) after the last forward slash (/) in the URIMAP, which enables generic paths to access the same service. It also allows information, such as a record key, to be included in the URL.
The path name begins with a single forward slash and can contain multiple nodes (levels), each separated by a forward slash. The last node of the path ends with a single forward slash. Additional information, such as record key, can be included in the URL after the last slash of the path.
Query string
If a query string is provided, it follows the path component and provides a string of information that the service can use. The query string is prefixed with a question mark (?) character. The format of the query string is free form and specific for the service. A string of name-value pairs that are separated by an ampersand (&) is shown in the following example:
?FirstName=John&LastName=Doe
The scheme and host components of a URL are not case-sensitive, but the path and query string are case-sensitive. For simplicity, the entire URL usually is specified in lowercase. The path can contain one or more nodes (identifiers), which are separated by a forward slash (/), where information after the last node can represent user supplied information, such as a record key.
A URL with one node in the path with a key provided in the query string is shown in the following example:
http://hostdomain:55123/SalesInfo/?key=1234
A URL with two nodes in the path with a key provided in the query string is shown in the following example:
http://hostdomain:55123/SalesInfo/Regional/?key=1234
A URL with two nodes in the path with a key provided in the URL is shown in the following example:
http://hostdomain:55123/SalesInfo/Regional/1234
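To show how a CICS service program can obtain these URL components at run time, the following sketch uses the WEB EXTRACT command. The data names and buffer sizes are illustrative assumptions.

      *    Illustrative data names and buffer sizes
       01  WS-PATH              PIC X(255).
       01  WS-PATH-LEN          PIC S9(8) COMP VALUE 255.
       01  WS-QUERY             PIC X(255).
       01  WS-QUERY-LEN         PIC S9(8) COMP VALUE 255.

      *    Extract the path and query string of the incoming request
           EXEC CICS WEB EXTRACT
                PATH(WS-PATH)
                PATHLENGTH(WS-PATH-LEN)
                QUERYSTRING(WS-QUERY)
                QUERYSTRLEN(WS-QUERY-LEN)
           END-EXEC

      *    The service can now parse the path nodes and the
      *    name-value pairs (for example, key=1234) as needed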
For more information about ReST, see Creating IBM z/OS Cloud Services, SG24-8324.
 
Note: The Internet Engineering Task Force (IETF) is the authoritative source for these web standards and should be referenced to ensure adherence to the applicable specifications.
 
3.3.3 Resource pooling
A wanted outcome of efforts to implement cloud services is secure multitenancy in a shared hosting environment. That environment should manage pooled resources and allow sharing and efficient use of resources in such a way that resource users are unaware of other users of the same resources.
Computer resources, such as memory, processing capacity, and storage, are represented as aggregates and appropriate amounts of resources are assigned to individual consumers.
These resources are dynamically allocated, deallocated, and reallocated as needed. By this allocation process, the set of resources that are available are used as a pool to allocate from or to be returned to. The pool size can also increase or decrease as needed to support the various services and tenants that are hosted in the environment. The increase and decrease of the pool size can be dynamic or controlled through operator or administrator intervention.
Inherent pools
If you have z/OS experience, you are familiar with the concept of resource pooling. IBM z Systems evolved from a platform that was designed as a multi-tenant environment with access to pooled resources. Resource pooling is inherent to z Systems.
This inherent characteristic of the platform makes it straightforward to enable some amount of resource pooling. Virtually any processing (even existing processes) uses pooled resources to some extent.
Even beyond the resource pooling that is provided as part of the platform, other forms of resource pooling can be configured and used for service delivery; for example, a group of highly available CICS regions to serve tenant requests.
The use of IBM Parallel Sysplex® can greatly expand the available resources of a hosting environment by creating a collection of pools for some resources. Also, larger, broader pools of other resources are distributed across the sysplex.
A combination of automated processes and controls are necessary to assign and distribute resources.
These controls include workload management (WLM) policies and service classes, which facilitate controlling workload priorities. Storage management subsystem (SMS) constructs can be used to control disk storage allocations and performance characteristics. This is not to say that other mechanisms are never needed to accomplish the required level of resource management. Rather, most general needs are covered by default, and specific custom needs can be addressed individually.
3.3.4 Rapid elasticity
Elasticity refers to the ability of capacity to adapt to increases and decreases in workload demand. This characteristic is closely related to the pooled resource characteristic in that resource assignment and reassignment are necessary to achieve elasticity.
Scale up or out, and scale down or in
An interesting component of the elasticity characteristic is the scale-back or scale-down action and the primary driving factor behind it. The scale-back or scale-down action is a cost-controlling measure that is primarily concerned with avoiding over-provisioned resources. This area is another instance in which the unique nature of z/OS provides some inherent relief in satisfying this characteristic because of the negligible cost that is associated with over-provisioning certain resources.
For more information about the role of workload manager (WLM) in satisfying rapid elasticity, see 4.2, “Service owning region” on page 28 and 5.2, “Workload manager” on page 47.
3.3.5 Measured service
This characteristic is concerned with tracking who is using which resources and reporting the respective information back to the involved parties, such as consumers, providers, operations team, and capacity planners. Resource usage must be monitored and reported at the appropriate level of granularity for the type of service and the intended audience.
Consumers often receive an abstracted, higher-level contextual view of the usage data. From this perspective, the information usually is provided on a per unit basis, such as number of requests, number of users, and megabytes used. This abstracted level of per unit reporting requires that a model or an approach is created to aggregate the costs of the various lower-level resources that are required to provide the service.
This abstracted view also can be useful for service providers, but providers also require more granular visibility. The service providers often are concerned with the operational health of the environment and the services that are being hosted there, so a deeper view of resource utilization is needed.
From these different perspectives, metering information is used for the following purposes:
Provides information about the IT costs that result from resource use and enables service providers to put in place chargeback or showback mechanisms.
Is used to direct resource assignment decisions.
Is relied upon to determine health of the environment.
Is used for other various operational considerations and decisions.
For the caching service, Walmart’s engineers elected to aggregate the reporting view at peak request volume to keep it simple for the consumer.
The service consumers receive a view of total request volume per 10-minute interval over time. The service consumers are expected to keep peak activity within a 24-hour period close to the projected values.
The service providers monitor this same information. They also review capacity usage per transaction (that is, per service request) at the region level, and at the LPAR and sysplex levels to capture different views into the operational state.
A benefit of this visibility (particularly from the consumer perspective) is that it should ultimately promote self-governance of resource usage. Consumers can monitor the cost of their usage on an on-going basis, and they have the data that they need to make informed decisions on continued operational cost assignments.
Naming conventions
Well-thought-out naming conventions are important, and their use is highly recommended for identifying and reporting on resources. Having identifying characteristics within the names of resources, such as data set names and transaction IDs, can make it easier to tie together resources and a consumer with native reporting tools. This ability must be balanced with the naming restrictions that are associated with each resource type. The use of naming conventions might not be possible in all cases; therefore, other mechanisms must be employed to track resource usage.
3.4 Microservices architecture
The microservices architecture style is an emerging approach to application development in which applications are composed of collections of independent, limited-function services that are commonly accessed via HTTP.
3.4.1 Application design pattern
The overall design pattern of a microservices-based application involves a more compartmentalized approach to development than traditional monolithic methods. It is similar in some regards to service-oriented architecture (SOA), but it is lighter weight, less rigid, and more flexible.
3.4.2 Independent, limited function services
Historically, applications tend to consist of logical components. The microservices design can be viewed as the externalization of these logical components to individual independent functions delivered as services. As shown in Figure 3-3 on page 25, the result is the ability to compose applications in a building block type manner.
 
Figure 3-3 Microservices architecture
Although this approach can introduce some complexity, it also provides significant value. Flexibility in deployments, avoidance of rework, greater manageability of individual functions, and scalability of distinct components all contribute to more dynamic and agile applications.
In this IBM Redbooks publication, this concept applies to net new capabilities that are delivered as services. It also likely involves the decomposition of applications into modularized services to tap into their value. For more information, see Creating IBM z/OS Cloud Services, SG24-8324.
3.4.3 Encouraged by cloud delivery model
The modularized microservices also must be acquired or consumed in a dynamic fashion. The cloud delivery model provides this capability. The ability for developers to quickly attain broadly accessible, efficient, measured, and scalable service instances for composition into applications unlocks creativity and responsiveness to business needs.
3.5 Summary
The service provider is a multifaceted role that requires numerous technical disciplines to deliver useful services. Service providers are responsible for supplying a capability and delivering that capability in a way that is beneficial to the consumer.
The creation of a capability comes from industrious engineers and is guided by technical requirements from the consumer. A valuable delivery approach for that capability can be achieved by following the cloud computing model.
This chapter reviewed numerous aspects of how CICS on z/OS can be an ideal platform for both service creation and service delivery via the cloud model.
For more information about various features of the platform that should be considered for providing services, see Chapter 4, “The CICS systems programmer” on page 27 and Chapter 5, “The z/OS systems programmer” on page 45.
 
 
 