Enterprise data access patterns
This chapter describes the key database topics to consider when you evolve a monolith application to a microservices architecture. It describes the challenges you might encounter and the patterns you can use to meet those challenges. It also walks through an example of the tasks that are necessary to move a Java application from a monolith to microservices.
This chapter includes the following sections:
4.1, “Distributed data management”
4.2, “New challenges”
4.3, “Integration with the monolith application”
4.4, “Which database technology to use”
4.5, “Practical example: Creating the new microservices”
4.1 Distributed data management
A typical monolith application has one or two relational databases that include all the information the system needs to work, and it is usually managed by a specific team. The application’s architecture makes adding new functionality, scaling, and changing the application a slow and high-risk process.
In a microservices architecture, each microservice must have its own database, which allows new functionalities (that is, new microservices) to be added quickly and with a low risk of affecting other functionalities of the application. This approach also allows the use of the appropriate tool for each functionality. For example, you might use the Data Cache for Bluemix service or a Redis database to store key-value information that is common across the application, such as a list of cities, stores, or airports. The appropriate tool might instead be a relational database to store transaction (order) information that must remain consistent across the various entities that are related to the process.
This new approach offers advantages (see Chapter 1, “Overview of microservices” on page 15), but it moves the application from centralized data management to distributed data management, which brings new challenges that must be considered.
4.2 New challenges
When you move from centralized data management to multiple tools and databases, you face challenges that were not present in a monolith application:
4.2.1 Data consistency
By having a single database for the complete application, you can get, modify, and create various records in only one transaction, which gives you the certainty of always having the same view of the information. In a monolith application, the inconsistency problem appears only when the application grows large and the queries must run faster. At that point, you create cache services or send common information one time to the client device (for example, a mobile device, HTML page, or desktop application) to reduce server calls, which produces the problem of ensuring that all stages (that is, database, cache, and client) have the same information.
In a microservices architecture with distributed data management (multiple databases), the inconsistency problem appears from the beginning of development, whether the application is big or small. For example, consider a rewards system where a person orders air tickets with reward points. To create a new ticket order, you must first get the person’s current points total and the cost of the air ticket in points, and then, after comparing both values, create the order.
This process in a microservices architecture has the following stages:
1. Get the price of the ticket in points from the Catalog microservice.
2. Get the total amount of points of the person ordering the ticket from the Rewards microservice.
3. Compare the points cost and the amount of points in the business logic level of the Orders microservice.
4. Create the ticket order from the Orders microservice.
5. Create the reservation of the flight from an external service for reservations.
6. Update the number of points of the person creating the order from the Reward microservice.
This process involves three separate microservices, each with a different database. The challenge is to share the same information among all the microservices: to ensure that none of the information changes while the ticket order completes, and to ensure that every microservice holds the complete, updated information at the end of the transaction.
The consistency problem is common in distributed systems. The CAP theorem states that in a distributed system, consistency, availability, and partition tolerance (CAP) cannot occur at the same time; only two of these characteristics can be guaranteed simultaneously.
For a simple explanation of the CAP theorem, see the following website:
In today’s environments, availability is usually the most important choice. Therefore, two patterns can be considered as solutions to the consistency problem:
Pipeline process
This process might seem like the easy solution for the consistency problem, and it works for scenarios where you need to follow a set of steps to complete one process (transaction). Continuing with the example of a rewards system, with the pipeline pattern, a single request to the Orders microservice starts a flow between microservices, as shown in Figure 4-1 on page 54.
Figure 4-1 Pipeline pattern
Although the pipeline process shown in Figure 4-1 is common, it might not be the best solution because it creates a dependency between services: if you make a change in the Catalog or Rewards microservice, you must also change the Orders microservice.
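As a minimal sketch of this coupling, assuming hypothetical service URLs and payload fields, the Orders microservice in a pipeline approach chains synchronous calls to the other services, so any change to their APIs forces a change here:

// Illustrative sketch only: service URLs and payload fields are hypothetical.
const axios = require('axios');

const CATALOG_URL = process.env.CATALOG_URL || 'http://catalog:8080';
const REWARDS_URL = process.env.REWARDS_URL || 'http://rewards:8080';

// Orders microservice: the whole transaction runs as one chained flow.
async function createOrder(customerId, ticketId) {
  // Step 1: get the ticket price in points from the Catalog microservice.
  const ticket = await axios.get(`${CATALOG_URL}/tickets/${ticketId}`);
  // Step 2: get the customer's points from the Rewards microservice.
  const points = await axios.get(`${REWARDS_URL}/customers/${customerId}/points`);

  // Step 3: compare both values in the Orders business logic.
  if (points.data.total < ticket.data.pointsCost) {
    throw new Error('Not enough points to order this ticket');
  }

  // Steps 4 - 6: create the order record, call the external reservation
  // service, and update the customer's points through the Rewards API.
}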
Event-driven architecture pattern
The other solution to the consistency problem is the event-driven architecture, where microservices communicate with each other by publishing an event when something important happens in their databases, for example, when a record is created or updated. Other microservices subscribe to that queue of events and, when they receive a message, update or create their own records. This record update or creation might in turn lead to a new event being published.
By using events, you can create transactions that are related to multiple microservices.
Referring to the previous example, the steps and events necessary to complete the transaction are as follows:
1. The client creates an order calling the Orders microservice. This microservice then creates an order record in its database with an in-process status and publishes an order created event (see Figure 4-2).
Figure 4-2 In-process status record created
2. The Rewards and Catalog microservices receive the order created event message and update their respective records to reserve the points and to hold the ticket price. Each one then publishes a new event (see Figure 4-3).
Figure 4-3 Confirmation from microservices
3. The Orders microservice receives both messages, validates the values, and updates the created order record state to finished. It then publishes the order finished event (see Figure 4-4 on page 57).
Figure 4-4 Record state updated to finish
4. The Rewards and Catalog microservices receive the final event and update the respective records to an active state (see Figure 4-5).
Figure 4-5 Final update of related records
With this pattern, each microservice updates its database and publishes an event without depending on other microservices; the message broker ensures that each message is delivered at least one time to the related microservices. Although this approach achieves these benefits, consider that it is a more complex programming model: rollback transactions must now be programmed. For example, suppose the user does not have enough points to order the ticket. A compensating transaction that cancels the created order must be created to resolve that situation. Section 4.2.2, “Communication between microservices” describes another pattern that might help with this consistency problem.
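The following is a minimal sketch of this event flow. It uses a local EventEmitter as a stand-in for a real message broker, models only the Rewards confirmation for brevity, and all function, topic, and field names are illustrative assumptions rather than code from a real implementation:

// Illustrative sketch: a local EventEmitter stands in for the message broker,
// and a Map stands in for the Orders database.
const EventEmitter = require('events');
const broker = new EventEmitter();
const ordersDb = new Map();

// Orders microservice: create the order with an in-process status and
// publish an "order created" event (step 1 above).
function createOrder(orderId, customerId, ticketId) {
  ordersDb.set(orderId, { customerId, ticketId, status: 'in-process' });
  broker.emit('order.created', { orderId, customerId, ticketId });
}

// Rewards microservice: reserve the customer's points when the "order
// created" event arrives, then publish a confirmation event (step 2 above;
// the Catalog microservice would do the same for the ticket price).
broker.on('order.created', (event) => {
  // ... reserve the points in the Rewards database here ...
  broker.emit('points.reserved', { orderId: event.orderId });
});

// Orders microservice: when the confirmations arrive, mark the order as
// finished and publish the final event (steps 3 and 4 above).
broker.on('points.reserved', (event) => {
  ordersDb.get(event.orderId).status = 'finished';
  broker.emit('order.finished', { orderId: event.orderId });
});

createOrder('o-1', 'c-42', 't-7'); // drives the whole event chain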
4.2.2 Communication between microservices
As 4.1, “Distributed data management” describes, new challenges must be met in addition to data consistency. The next challenge is communication between microservices. In a monolith application, having only one database allows information to be queried from different tables without problems. In a microservices architecture, however, you must define how to consolidate information from separate microservices before delivering it to the client.
Continuing with the airline rewards system example in 4.2.1, “Data consistency” on page 52, suppose that you must create a list of the tickets in the catalog together with all the reviews that users made about the places. You must get that information from two databases:
The list of places (from the Catalog microservice database).
All the comments about the places listed in the Catalog microservice (from the Reviews microservice database).
After you get the information from the databases, the consolidated information must be sent to the client. In a microservices architecture, one microservice must not query the database from another microservice because this creates a dependency between them. With the approach in the example, each database must be accessed only by the API of its microservice. For cases like this, use the Gateway pattern.
API Gateway pattern
The API Gateway pattern consists of the creation of a middle tier to provide additional interfaces that integrate the microservices. This middle tier is based on an API Gateway that sits between the clients and microservices; it provides APIs that are tailored for the clients. The API Gateway hides technological complexity (for example, connectivity to a mainframe) versus interface complexity.
The API Gateway provides a simplified interface to clients, making services easier to use, understand, and test. It does so by providing different levels of granularity to different types of clients: it can provide coarse-grained APIs to mobile clients and fine-grained APIs to desktop clients that can use a high-performance network.
In this scenario, the API Gateway reduces chattiness by enabling clients to collapse multiple requests into a single request optimized for a given client, such as the mobile client. The benefit is that the device then experiences the consequences of network latency one time, and it leverages the low-latency connectivity and more powerful hardware on the server side.
Continuing with the example, to get the list of places and reviews, create an API Gateway that exposes a service to get the consolidated list of places with their reviews. This gateway makes a call to the Catalog microservice and to the Reviews microservice, consolidates the information, and sends it to the client, as shown in Figure 4-6.
Figure 4-6 API Gateway pattern to aggregate information from separate microservices
This pattern can help solve inconsistency and data aggregation problems. However, it introduces a new risk that is related to system performance (latency), because there is now an additional network hop that is also the entry point for clients. Therefore, the gateway must be able to scale and have good performance.
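The following is a minimal sketch of such a gateway route, assuming Express and axios and hypothetical service URLs and field names: one client request fans out to the Catalog and Reviews microservices, and the gateway returns a single consolidated response.

// Illustrative sketch only: service URLs and field names are hypothetical.
const express = require('express');
const axios = require('axios');

const app = express();
const CATALOG_URL = process.env.CATALOG_URL || 'http://catalog:8080';
const REVIEWS_URL = process.env.REVIEWS_URL || 'http://reviews:8080';

// One client call fans out to two microservices; the gateway consolidates
// the responses before returning a single payload to the client.
app.get('/api/v1/places', async (req, res) => {
  try {
    const [places, reviews] = await Promise.all([
      axios.get(`${CATALOG_URL}/places`),
      axios.get(`${REVIEWS_URL}/reviews`)
    ]);
    const consolidated = places.data.map((place) => ({
      ...place,
      reviews: reviews.data.filter((review) => review.placeId === place.id)
    }));
    res.json(consolidated);
  } catch (err) {
    res.status(502).json({ error: 'A downstream microservice call failed' });
  }
});

app.listen(3000);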
Shared resources pattern
To gain more autonomy in a microservices architecture, each service must have its own database, and databases must not be shared with or accessed from a different microservice. However, in a migration scenario, separating entities that already work together can be a difficult task. Instead of struggling to decouple all the entities, an easier way exists to handle this special case.
For cases in which entities are highly coupled, a solution that might be seen as more of an anti-pattern can help: a shared database. In this form, instead of having two microservices and two databases (one for each microservice), you have only one database that is shared between the two microservices. The key for the shared resources pattern to work is to keep the business domain close (that is, use it only when the entities are related to the same business domain). Do not use it as a general rule in the architecture, but as an exception. The example in Figure 4-7 shows how this pattern works.
Figure 4-7 Shared resource pattern to share information between services
Because this pattern is used with a business domain approach, it might eventually result in only one microservice managing both entities. For this example, if the application has reviews for the places that are in the Catalog database, you can consider those reviews an attribute of the ticket and leave them in the same database and the same microservice. Ultimately, you have the Catalog microservice with all the information that is related to the tickets, including their reviews.
4.3 Integration with the monolith application
When you evolve a monolith application to use microservices, rather than rewriting it or creating a new application, situations can arise where the complete application cannot evolve, or where the evolution occurs in stages. Therefore, you must make sure that the evolved and new components of the application can communicate and work together with the unchanged parts of the monolith application (for example, transactional systems that are too complex to change).
In cases where you plan to keep one or more parts of the monolith application, an important approach is to treat each of those parts as one more microservice. You can then integrate it the same way you integrate the other microservices.
4.3.1 Modifying the monolith application
Various approaches to refactoring an application are described in 3.4, “Refactoring” on page 42. The result of refactoring a monolith application is a set of separate microservices, each with a separate database. In most cases, you keep some parts of the monolith application alive. The goal for those parts is to make them communicate in the same way that the other microservices communicate. How to do this depends on the patterns you use and on the technology stack in which the monolith application was developed.
If you use the event-driven pattern, make sure that the monolith application can publish and consume events, which might require detailed modification of its source code. Alternatively, you can create an event proxy that publishes and consumes events and translates them for the monolith application, so that the changes to the source code are minimal. In either case, the database remains the same.
If you use the API Gateway pattern, be sure that your gateway can communicate with the monolith application. One option is to modify the source code of the application to expose RESTful services that are easy for the gateway to consume. Another option is to create a separate microservice that exposes the monolith application procedures as REST services. Creating a separate microservice avoids big changes to the source code, but it adds a new component to maintain and deploy.
4.4 Which database technology to use
The capability to choose the appropriate tool for each functionality is a big advantage, but it leaves the question of which database is best to use. The objective of this section is to help you understand the various technologies and use cases for each database. Then, when you decide to evolve your application from a monolith to a microservices architecture, you can also decide whether changing the technology you are using is a good approach. In simple terms, this decision can be summarized as SQL or NoSQL.
4.4.1 Relational databases
Relational databases (known as SQL databases) are well known; you might already use a relational database as the database of the monolith application you want to evolve. SQL databases are used in almost all applications that need to persist data, no matter what the use case or scenario is.
Some of the characteristics provided by a relational database follow:
Normalized data to remove duplication and reduce the database size
Consistency and integrity in transactions with related data (for example, atomicity, consistency, isolation, and durability, or ACID)
A relational database is not the best option in these cases:
Fast access to big amounts of data
Hierarchical data
Key-value data (that is, caches and static data)
Rapidly changing schemas or multiple schemas for the same data
4.4.2 NoSQL databases
The term NoSQL comes from the fact that almost all systems used to be developed on an SQL database, even when the data was not relational; the term NoSQL clarifies that the database is not an SQL database. It covers databases for all types of data, for example key-value, document, and time-series data.
Today there are various NoSQL database options, each one designed to achieve a specific outcome. In microservices, each service must have its own database, which allows you to evaluate each database both for its integration capabilities and for the specific case or problem that must be solved.
Some examples of NoSQL databases are as follows:
Document databases to save documents and their data. An example case is saving sales information, including the store, seller, client, and products, in the same document:
 – Apache CouchDb: JSON documents
 – MongoDB: JSON documents
Graph databases to store data with a natural relationship. An example includes social network data or customer data, including past purchases, to make real-time suggestions based on interests:
 – Apache Tinkerpop
Key-value databases used for high availability, reference data, or caching. An example is a list of the stores in each city that is common to different parts of the application (see the sketch after this list):
 – Redis, an in-memory database
 – IBM Data Cache service on Bluemix, an in-memory database
Columnar databases to store large amounts of isolated data for analytics, for example, all the user interactions in a website to make real-time analytics of the user’s behavior:
 – Druid
 – Cassandra
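To illustrate the key-value case, the following is a minimal sketch that caches a list of stores per city in Redis by using the redis Node.js client; the key name, city, and store list are assumptions for illustration.

// Illustrative sketch: key name and data are hypothetical.
const { createClient } = require('redis');

async function main() {
  const client = createClient(); // defaults to redis://localhost:6379
  await client.connect();

  // Write the reference data once, for example at application start,
  // with a one-hour expiration.
  const stores = ['Downtown', 'Airport', 'Harbor'];
  await client.set('stores:lima', JSON.stringify(stores), { EX: 3600 });

  // Any part of the application can now read the list without querying
  // the relational database.
  const cached = JSON.parse(await client.get('stores:lima'));
  console.log(cached);

  await client.quit();
}

main();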
NoSQL databases are now generally accepted. With so many solutions available, the challenge is to find the correct one for your problem. Often the simplest solution is the best solution: start by analyzing the technologies you already know and whether they are the correct solution for your problem. Sometimes, however, learning a new technology is easier than trying to adapt one that you already know. Also, with cloud computing, installing and managing every component is not necessary, so you can immediately start using the tools you need.
4.5 Practical example: Creating the new microservices
As described in 3.5, “Identifying and creating a new architecture example” on page 46, the evolution of the Java monolith application is divided into four microservices: a new user interface (UI), a new microservice to search products, an evolved Customer microservice, and the Orders microservice.
The first microservice to create in this example is the new Search microservice. Its creation is divided into the following stages:
 – Cloud computing environment
 – Run time
 – New database technology
 – Secure integration
 – Data transformation
 – On-premises database secure connection
 – Extract, transform, and load (ETL) the data
4.5.1 New microservice catalog search
As described in Chapter 1, “Overview of microservices” on page 15, a better search system is needed for the application so users can find the products they need in an easier way. This new search must be quick and composed of dynamic rather than specific filters.
In the monolith application, you can filter the products list only by the category of the product, and these categories are created in the SQL database called ORDERDB. Here, one product belongs to one or more categories and one category can have one or more products. Figure 4-8 shows how the products and categories relate (that is, many-to-many).
Figure 4-8 Product categories relational model
Select the technology stack
Select the appropriate technologies for this kind of search. Also, select the appropriate environment to deploy, create, run, and manage this microservice.
Cloud computing environment
As described in 3.5.2, “Architecture: To be”, this example uses a platform as a service (PaaS) to gain more speed and flexibility. Start evaluating the options and decide which one is the best fit for this solution.
Although various PaaS providers are available, this example uses IBM Bluemix because it gives flexibility in the deployment options (Cloud Foundry, Docker containers, and virtual machines), all of which are based on open technology. IBM Bluemix also has a rich catalog of services that provides all the tools needed to create the complete solution.
Run time
In a microservices architecture, each microservice can be written in a different language. Although choosing a language might be difficult because so many are available, this example does not focus on why one is better than another. Because this example creates a RESTful API only to search the products, Node.js with the Express framework is selected: its callback-based asynchronous I/O allows non-blocking interactions, and it allows APIs to be created quickly.
Database
As described in 1.3.1, “Fictional Company A business problem”, the search options must be improved so that the process is easier for customers. The search must be a dynamic text search with a relevance score for what the users want, not a search limited to filters that developers define.
Relational databases are not the best solution for this kind of job for the following reasons:
You must create full text fields and then create indexes, both of which increase the database size.
You must base the searches on the columns (fields) of the table, which means you can only add new filters on product attributes, not create a dynamic search.
Relational databases do not work quickly when millions of records exist.
Therefore, the NoSQL database option is the choice for this example.
In the IBM Bluemix catalog, several selections are available for managed NoSQL database services. Because the objective of this example is to improve the search options, the Elasticsearch by Compose service is selected for the following reasons:
It has full-text search.
It is JSON document-oriented where all fields are indexed by default and all the indices can be used in a single query.
It is schema-less, which allows new attributes to be added to products in the future.
It works with a RESTful API; almost any action can be made by using a simple HTTP call with JSON data, as the sketch after this list shows.
The Compose service deployment scales as the application needs, and it is deployed in a cluster by default.
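The following is a minimal sketch of that RESTful, schema-less behavior: adding a product document with a single HTTP call. The host, index name, and document fields are assumptions, and the URL path follows recent Elasticsearch conventions, so adjust it for the version that your deployment provides.

// Illustrative sketch: host, index name, and document shape are hypothetical.
const axios = require('axios');

const ES_URL = process.env.ES_URL || 'https://user:password@host:port';

// No schema is defined up front: every field in the body is indexed by
// default, and new attributes can simply be added to later documents.
axios.put(`${ES_URL}/catalog/_doc/1`, {
  name: 'Casablanca',
  categories: ['drama', 'classics'],
  price: 10,
  description: 'A classic drama set in wartime Morocco'
}).then((response) => console.log(response.data.result));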
Secure integration
Creating the new microservice involves migrating all the information from the product and category tables to the new database (Elasticsearch). The first step in the process is to create a secure integration between the on-premises database and Bluemix. This step involves using the Secure Gateway service in the IBM Bluemix catalog, which provides secure connectivity from Bluemix to other applications and data sources running on-premises or in other clouds. Use of this service does not require modifying the data sources and does not expose them to security risks.
For more information, see the following website:
Data transformation
As noted in “Secure integration” on page 64, all product information in the tables must be migrated to Elasticsearch in Bluemix. The data must first be transformed from the tables and relations to a JSON model. The advantage of using a JSON model is that you can show the information the way you want in the presentation layer by saving it with the schema you need. This is better than showing it based on how the data is saved and the relations it has with the other tables.
The main objective is to provide a better search service for the user: how does the user want to search or view? Based on that information, the document is created. In the monolith application, the search is based on the categories of the product because that is how the data is saved. However, now it can be saved based on the products and can use the category as one more attribute for the search. Figure 4-9 compares the relational model and the JSON model.
Figure 4-9 Product relational model compared to JSON
As Figure 4-9 shows, the same PRODUCT_ID and CAT_ID are kept to continue working with the monolith orders system.
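As an illustration of this transformation, the following is a hypothetical sketch of one product document after the change. Apart from PRODUCT_ID and CAT_ID, the field names and values are assumptions and not the exact model shown in Figure 4-9.

// Hypothetical product document after the transformation.
const productDocument = {
  PRODUCT_ID: 'P-100',             // kept from the relational model
  name: 'Casablanca',
  price: 10,
  description: 'A classic drama set in wartime Morocco',
  categories: [                    // categories become attributes of the product
    { CAT_ID: 'C-1', name: 'drama' },
    { CAT_ID: 'C-2', name: 'classics' }
  ]
};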
After you have the new JSON model, you define how to extract, transform, and load the data into Elasticsearch. To help with these types of tasks, Bluemix offers services such as the DataWorks service, which gets data from on-premises databases or other cloud infrastructures and loads it into a Bluemix data service, and the IBM DataStage® on Cloud service, which is designed for complex data integration tasks. Although those are useful services, the current version of DataWorks does not support loading data into Elasticsearch, and the DataStage service is more advanced than this scenario requires. Thus, for this example, an ETL tool is created as an API inside the Catalog search application in Bluemix.
Figure 4-10 shows the architecture of the new Catalog search microservice.
Figure 4-10 Catalog Search microservice architecture
Migrate the data to a new database
The steps for migrating all product information to the new Elasticsearch database involve securing a connection with the on-premises data center and then extracting, transforming, and loading the data.
On-premises database connection
Use the following steps to select the Secure Gateway service in Bluemix as the secure way to connect to the on-premises data center and then test the connection:
1. Open the IBM Bluemix website:
Click either LOG IN (to log in with your IBMid) or click SIGN UP (to create an account).
2. Create the Secure Gateway service:
a. Go to the Bluemix Catalog.
b. Select the Secure Gateway service in the Integration category.
c. Click Create.
3. Configure a gateway in the Secure Gateway dashboard. This option creates the connection point in Bluemix to connect to the on-premises or cloud services in a secure tunnel:
a. Click the Add Gateway option from the Secure Gateway Dashboard.
b. Insert the name of the gateway and select the security options and then click ADD GATEWAY. See Figure 4-11.
Figure 4-11 Add Gateway options
4. Add Destination for the Secure Gateway. Use this option to configure the on-premises resources that connect from Bluemix:
a. Click the Gateway you just created.
b. Click Add Destination.
c. Select On Premises and click Next.
d. Enter the host name (or IP address) and port of the resource that you want to connect to, in this case the IP address of the IBM DB2 server. Your window is now similar to what is shown in Figure 4-12. Click Next.
Figure 4-12 Secure Gateway Add Destination
e. Select TCP as the protocol to connect to DB2 and click Next.
f. Select None as the authentication method and click Next.
g. Leave the IP Tables rules empty and click Next.
h. Enter a name to use for the destination, in this case Monolith DB2, and click Finish.
Your Monolith application gateway window is now similar to Figure 4-13.
Figure 4-13 Monolith application gateway with DB2 destination created
The Monolith DB2 destination shows details (see Figure 4-14) such as the cloud host and port to use to connect to the on-premises DB2.
Figure 4-14 Monolith DB2 destination details
5. Now you can configure the gateway client. This client is the endpoint that runs on the on-premises data center to allow the secure connection. Use the following steps:
a. On the Monolith application gateway dashboard, click Add Clients.
b. Select a gateway client option to use (this example uses the Docker option):
 • IBM Installer: Native installer for different operating systems
 • Docker: A Docker image that is easy to deploy as a Docker Container
 • IBM DataPower®: A client to install in IBM DataPower
c. Run the docker command to start the gateway client:
docker run -it ibmcom/secure-gateway-client <gatewayId> --sectoken <secure token>
Note: The <secure token> is the token provided in the gateway client selections window.
Several seconds after running the gateway client, a confirmation is displayed (Figure 4-15 on page 69).
Figure 4-15 Gateway client connected confirmation
d. After installing and executing the Gateway client, configure the Access Control List (ACL) to allow the traffic to the on-premises database by entering the following command in the gateway client console:
acl allow <db2 ip address>:<port>
For this example, the command is as follows:
acl allow 1xx.xx.xx.x2:xxxxx
e. Test the connection to the DB2 database from your preferred DB2 client by using the cloud host and port given in the destination that is configured in step h on page 67.
Figure 4-16 on page 70 shows an example connection test.
Figure 4-16 DB2 connection test using the information provided by the secure gateway
Extract, transform, and load the data
Now that the secure connection to the on-premises database is established, create the script that extracts all the products and their related categories, transforms each record into JSON, and posts it to the Elasticsearch service in IBM Bluemix:
1. Create the Elasticsearch service:
a. Create a Compose account in:
b. Create the Elasticsearch deployment:
i. Enter the deployment name.
ii. Select the location you want to use.
iii. Select the initial deployment resources.
iv. Click Create Deployment.
Now you can see all the information about your deployment: status, connection information, and usage.
c. Add a user to the deployment:
i. Select Users → Add User.
ii. Enter the user name and password.
d. Configure the deployment information in Bluemix:
i. Log in to IBM Bluemix:
ii. Select the Elasticsearch by Compose service in the Data & Analytics category of the Bluemix Catalog.
iii. Enter the user name and password of the user you created on the Elasticsearch deployment.
iv. Enter the host name and port of the Compose deployment. This information is in the Elasticsearch deployment Overview, under the Connection information section.
v. Click Create.
The connection information of the Elasticsearch deployment is displayed and is ready to use as a Bluemix service (Figure 4-17).
Figure 4-17 Elasticsearch connection information as a Bluemix service
2. Create the ETL application. To create the ETL application for this example, a small Node.js script was developed. This script connects to the DB2 database by using the secure gateway cloud host information, uses a join to query the product and category tables, creates a JSON array with the format defined in “Data transformation” on page 65, and inserts it into Elasticsearch by using its bulk API (a sketch of this flow follows the list below).
The repository for the code can be accessed on GitHub. See Appendix A, “Additional material” on page 119 for more information.
The following is important information about the script:
 – This script is included as an API inside the Search application described in “Create the API to search the catalog” on page 72.
 – The connection variables for DB2 and Elasticsearch are defined in the catalogsearch/config/config.js file.
 – The script that extracts, transforms, and loads the data is in the catalogsearch/controllers/etl.js file.
 – Line 15 of the catalogsearch/controllers/etl.js file is the query that is run to extract the data.
 – Lines 26 - 39 create the JSON array of the products.
 – Line 40 calls the Elasticsearch bulk API with the JSON array created.
 – From a web browser, you can run the script by entering the path defined in the catalogsearch/routes/api/v1/etl.js file.
 – You can deploy this application on Bluemix by modifying the name and host variables in the catalogsearch/manifest.yml file and running the cf push command from the Cloud Foundry command-line interface (CLI).
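The following is a minimal sketch of the ETL flow that the script implements, not the repository code itself. The connection strings, table and column names, and the index name are assumptions for illustration.

// Illustrative sketch: connection details, table names, and index name are
// hypothetical.
const ibmdb = require('ibm_db');               // DB2 client
const elasticsearch = require('elasticsearch');

const db2ConnString =
  'DATABASE=ORDERDB;HOSTNAME=<secure-gateway-cloud-host>;PORT=<port>;' +
  'PROTOCOL=TCPIP;UID=<user>;PWD=<password>;';
const esClient = new elasticsearch.Client({ host: '<elasticsearch-host>:<port>' });

// Extract: join the product and category tables through the secure gateway.
const conn = ibmdb.openSync(db2ConnString);
const rows = conn.querySync(
  'SELECT P.PRODUCT_ID, P.NAME, P.PRICE, C.CAT_ID, C.CATEGORY ' +
  'FROM PRODUCT P JOIN PRODUCTCAT PC ON P.PRODUCT_ID = PC.PRODUCT_ID ' +
  'JOIN CATEGORY C ON PC.CAT_ID = C.CAT_ID');
conn.closeSync();

// Transform: group the rows into one JSON document per product.
const products = {};
rows.forEach((row) => {
  products[row.PRODUCT_ID] = products[row.PRODUCT_ID] ||
    { PRODUCT_ID: row.PRODUCT_ID, name: row.NAME, price: row.PRICE, categories: [] };
  products[row.PRODUCT_ID].categories.push({ CAT_ID: row.CAT_ID, name: row.CATEGORY });
});

// Load: build the action/document pairs that the Elasticsearch bulk API
// expects. (Newer Elasticsearch versions omit the _type field.)
const body = [];
Object.values(products).forEach((doc) => {
  body.push({ index: { _index: 'catalog', _type: 'products', _id: doc.PRODUCT_ID } });
  body.push(doc);
});

esClient.bulk({ body }, (err, resp) => {
  if (err) {
    console.error(err);
  } else {
    console.log('Loaded', resp.items.length, 'documents');
  }
});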
3. Keep data in sync. For this example, all product information is now in the Elasticsearch deployment. Because this example is a one-time job, creating synchronization tasks between the databases is not part of this discussion. However, if you want to keep your data synchronized, you can find strategies at the following website:
Create the API to search the catalog
Now that the product information is in the Elasticsearch deployment, you can create a Node.js application to expose an API to search the products from the various clients (web pages and mobile applications).
Searching for a product in this monolith application involves selecting a category and then looking product by product, as shown in Figure 4-18.
Figure 4-18 Monolith application product search
To make the user experience easier, provide a full text search by entering only one text field and calling a new search API. Figure 4-19 shows an example of the search of a category by sending only its name to the API.
Figure 4-19 New Catalog Search API response
See Appendix A, “Additional material” on page 119 for the catalogsearch application source code.
The following are the important parts of the application:
The connection variables are defined in the catalogsearch/config/config.js file.
The search API route is defined in the catalogsearch/routes/api/v1/products.js file and is called by entering a URL with the following format:
http://<appurl>/api/v1/products/search/<search terms>
In the following example, drama is the text that the user enters when searching for the movie:
https://catalogsearch.mybluemix.net/api/v1/products/search/drama
The search of the Elasticsearch documents is made by using the elasticsearch module, which calls the Elasticsearch API to look for the entered text in all the documents in the index, across fields such as name, categories, price, and description. This part of the code is at line 64 in the catalogsearch/controllers/products.js file (a sketch of this kind of search follows this list).
You can deploy this application on Bluemix by modifying the name and host variables in the catalogsearch/manifest.yml file and then running the cf push command from the Cloud Foundry CLI.
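The following is a minimal sketch of such a search route, not the repository code itself; the index name and the searched fields are assumptions for illustration.

// Illustrative sketch: index name and field names are hypothetical.
const express = require('express');
const elasticsearch = require('elasticsearch');

const router = express.Router();
const esClient = new elasticsearch.Client({ host: '<elasticsearch-host>:<port>' });

// GET /api/v1/products/search/<search terms>
router.get('/search/:terms', (req, res) => {
  esClient.search({
    index: 'catalog',
    body: {
      query: {
        // Look for the entered text across the indexed fields of each document.
        multi_match: {
          query: req.params.terms,
          fields: ['name', 'categories', 'price', 'description'],
          lenient: true // ignore type mismatches on numeric fields such as price
        }
      }
    }
  }).then(
    (results) => res.json(results.hits.hits.map((hit) => hit._source)),
    (err) => res.status(500).json({ error: err.message })
  );
});

module.exports = router;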
Summary
The new Catalog search microservice is now created by using the Elasticsearch database, with all the product information that was in the monolith application’s DB2 relational database. The example now continues the evolution of the monolith application with the evolution of the accounts data model.
4.5.2 Evolution of the account data model
As described in Chapter 1, “Overview of microservices” on page 15, the application must also be evolved by adding social network data to the customer information. This provides a better view of clients and allows you to make detailed suggestions about the company’s products based on the customer’s social interactions.
This microservice is defined as a hybrid microservice (in 3.5.2, “Architecture: To be” on page 47). It allows you to continue working with the personal information of the client in the DB2 on-premises database, and it allows you to add the social networks data in a new database.
Selecting the technology stack
For this microservice, continue working with IBM Bluemix as the PaaS provider, and continue working with its Secure Gateway service for the integration with the on-premises database. The following technologies are selected for this example:
Database
If you look at the information provided by the User APIs of two large social networks, you see that this data changes all the time and that it differs for every user; it does not have a predefined schema that fits everyone. Therefore, for this example, a NoSQL database is selected because it allows for a schema-less data model with the ability to change easily and quickly. IBM Cloudant® is the selected database.
Runtime
Continue using Java as the runtime of this microservice. The microservice provides information from two data sources (on-premises DB2 and Cloudant), and the methods to get the personal information of the clients are already developed in Java in the monolith application, so this choice takes advantage of all the code and knowledge you already have.
On-premises database integration
To have a secure integration to get the personal information of the customers, use the same Secure Gateway infrastructure that was installed previously (see “On-premises database connection” on page 66).
Figure 4-20 on page 75 shows the new architecture of the evolved Accounts microservice.
Figure 4-20 Accounts microservice architecture
Social information data model
Cloudant is already selected as the database for the social information. Now, create the service and the accounts database to use later with the application that exposes the accounts API:
1. Open the IBM Bluemix website:
Click either LOG IN (to log in with your IBMid) or click SIGN UP (to create an account).
2. Create the Cloudant service:
a. Go to the Bluemix Catalog.
b. Select the Cloudant service in the Data and Analytics category.
c. Click Create.
3. Create the database:
a. Select the service under the Bluemix Console.
b. Go to the Cloudant Dashboard by clicking Launch.
c. Click Create Database at the top right of the dashboard.
d. Enter accounts in the Create Database field (Figure 4-21 on page 76).
Figure 4-21 Create Cloudant database
4. Create the accounts view:
a. Under the accounts options, click the plus sign (+) to the right of Design Documents and then select New View.
b. Provide the following information for the fields (Figure 4-22 on page 77):
 Design Document: New document
 Index name: accounts
 _design/: all
 Map function: The function that emits the user name as the key and the complete document as the value (see the sketch after Figure 4-22)
 Reduce (optional): None
c. Click Create Document and Build Index.
Figure 4-22 Cloudant Accounts design document view
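The following is a minimal sketch of what such a map function can look like when it is pasted into the Map function field. The doc.username field is an assumption for illustration; use whatever field holds the user name in your social data JSON model.

// Key: the user name; value: the complete document.
function (doc) {
  if (doc.username) {
    emit(doc.username, doc);
  }
}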
d. An example of the social data JSON model extracted from two of the biggest social networks is in “Social networks JSON data model example” on page 119.
You can add this information as a new JSON Document in the accounts database by clicking the plus sign (+) next to the All Documents option and then clicking New Doc.
e. In the New Document window, add a comma after the _id that is generated by Cloudant and then paste the information from “Social networks JSON data model example” on page 119. Figure 4-23 on page 78 shows the result.
Figure 4-23 Cloudant New Document with the Social networks information of the customer
 
Note: This example copies the social network data manually from the APIs provided by two big social networks. For information about how to do this from your application, see the following websites:
Creating the new Accounts API
The NoSQL database service that saves the new social information of the customers is now selected and created, and a secure integration already exists between IBM Bluemix and the on-premises database. The next procedure is to create the API that gets the information from the two data sources (on-premises DB2 for personal information and Cloudant for social information) and creates a consolidated JSON model to send to the users.
To create this evolved microservice, follow the strategy described in 3.4.3, “How to refactor Java EE to microservices” on page 43. Here are the steps:
1. At this point, you know what needs to be split, so create a new WAR application responsible for the Accounts information by using Eclipse.
 
Note: Dependencies must be added to the project because the monolith application uses the Apache Wink JAX-RS implementation to expose REST services. This implementation is not supported in WebSphere Liberty since mid-2015, when Liberty started supporting Java EE 7, which provides its own native JAX-RS implementation.
a. Create a Dynamic Web Project.
b. In the New Dynamic Web Project options window, select WebSphere Application Server Liberty as the Target Runtime, clear the Add Project to an EAR check box, and click Modify for the Configuration option.
c. Select these options and then click OK (Figure 4-24):
 • Dynamic Web Module
 • Java, JavaScript
 • JAX-RS (Web Services)
 • JAXB
 • JPA
Figure 4-24 Dynamic Web Project configuration
d. Click Finish.
The New Dynamic Web Project configuration window opens (Figure 4-25).
Figure 4-25 Dynamic Web Project page
2. Create the same package structure as in the monolith application to save the standards and practices used by the Java development team.
3. Copy all the code that is related to Accounts from the monolith project and put it in the new Java project.
4. In the new project, modify the source code to consolidate the social network information in the Cloudant database with the DB2 information.
5. Create a new REST service to get the complete user information.
For this microservice, the AccountService application was developed. For information about how to get the application, see Appendix A, “Additional material” on page 119.
 