Chapter 7. Discovering microservices for consumption

This chapter covers

  • Why service discovery is important
  • How to register a microservice so it can be discovered by clients
  • Which service registries are supported by Thorntail
  • How to look up a microservice within a client

As part of decomposing pieces of the Cayambe monolith into separate microservices, you’ve decided that you need a service for processing order payments. This new microservice will then be used within the Cayambe monolith in chapter 10.

Dozens, if not hundreds, of providers offer payment processing services. Initially, you’ll develop basic integration with Stripe (https://stripe.com/docs/quickstart). To facilitate future expansion of payment providers, you’ll integrate with Stripe in its own microservice. The new payment microservice will then use the Stripe microservice to process and record the payment with the Stripe online service.

In previous chapters, you’ve seen how to access separate microservices directly by referring to the URL where a microservice is running. In this chapter, you’ll take calling microservices a step further by decoupling your client from the microservice it’s consuming, making it easier to scale.

Unless you’re developing a microservice for your own use, it’s virtually guaranteed that you’ll need the ability to scale the number of instances of your microservice in production. Without the ability to scale, your application will always have problems coping with the load placed upon it by users.

7.1. Why does a microservice need to be discovered?

Taking the approach to microservices from chapter 6, the new Payment microservice would locate Stripe via a hardcoded URL, as shown in figure 7.1.

Figure 7.1. Microservice direct lookup

This approach is perfectly fine for local testing of microservices, to make sure they do what you want, but you don’t want to rely on hardcoded strings to locate microservices in production! Moving a single instance of a microservice from one environment to another would require rebuilding every client to use the microservice’s new URL location: an operational nightmare that isn’t conducive to delivering business value in a timely manner. And that’s before taking into account the desire to have more than one instance of a microservice available for handling requests, to better scale an application.

If you kept relying on hardcoded URLs, your Payment microservice would contain an ever-growing list of possible instances for Stripe to distribute its requests across, as shown in figure 7.2.

Figure 7.2. Microservice direct lookup with multiple instances

This architecture also requires the client, Payment, to contain code that spreads the load across the instances of Stripe. When the client is responsible for determining which instance of a microservice to consume, the process is known as client-side load balancing.

Developing load-balancing techniques for the microservices you need to consume isn’t where you want to be spending your precious development time. Ideally, you want a framework or library to handle that complexity for you, allowing your code to request a single instance to operate on.
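To make the problem concrete, here’s a minimal sketch of the kind of client-side balancing code you’d otherwise end up writing by hand. The instance URLs are hypothetical, and a real implementation would also need failure handling and instance refresh:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of client-side round-robin balancing over a fixed list
// of instance URLs -- exactly the code you want a framework to own.
public class RoundRobinBalancer {
    private final List<String> instances;
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinBalancer(List<String> instances) {
        this.instances = instances;
    }

    // Cycle through the instances in order, wrapping around at the end.
    public String next() {
        int index = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(index);
    }

    public static void main(String[] args) {
        RoundRobinBalancer balancer = new RoundRobinBalancer(List.of(
                "http://stripe-1:8082", "http://stripe-2:8083"));
        System.out.println(balancer.next()); // http://stripe-1:8082
        System.out.println(balancer.next()); // http://stripe-2:8083
        System.out.println(balancer.next()); // http://stripe-1:8082
    }
}
```

Even this toy version has to worry about thread safety; add health checks, retries, and weighting, and it quickly stops being code you want to maintain yourself.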

What can be done to reduce some of the pain of your situation? Enter service discovery!

7.1.1. What is service discovery?

Service discovery is the means by which one microservice retrieves the physical location of another microservice, at runtime, for the purpose of consuming it. Service discovery requires the use of a service registry. Otherwise, there’s no place from which the discovery process can retrieve the URL.

How does adding service discovery into the flow of consuming a microservice affect the way your consuming microservice operates? See figure 7.3.

Here’s how your Payment microservice makes a call to the Stripe microservice by discovering it through a service registry:

  1. The Payment microservice requests the locations for the Stripe microservice from a known service registry.
  2. The service registry returns all the available Stripe instances.
  3. The Payment microservice sends a request to the Stripe microservice instance retrieved from the service registry.
  4. The Payment microservice receives a response from the Stripe microservice.
Figure 7.3. Service discovery

The process seems simple enough, but how does service discovery work? First, you need to have a place to look to find the microservices you need. That’s the role of the service registry.

You now have a place to look for the microservices you need, but that doesn’t mean a lot if it’s empty! Anytime an instance of a microservice is started, it needs to contact the service registry to provide it with a name and a URL location where it can be accessed. The name doesn’t need to be unique, but all microservice instances registered under an identical name do need to expose the same API. If they don’t, any client of those microservices is going to see very different and unexpected results!

After a service registry is populated with data, your client microservice can ask it to provide the URL locations of all instances for a specified service name. At this point, it’s up to your client as to how it determines which location to use when consuming the microservice.
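At its core, a service registry is little more than a map from service names to lists of instance URLs. The following sketch (class and method names are mine, not any real registry’s API) shows that essential contract; real registries such as Consul or Eureka add health checks, persistence, and change notifications on top:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of a service registry reduced to a thread-safe in-memory map.
public class ServiceRegistry {
    private final Map<String, List<String>> services = new ConcurrentHashMap<>();

    // A starting microservice instance registers its name and URL.
    public void register(String name, String url) {
        services.computeIfAbsent(name, k -> new CopyOnWriteArrayList<>()).add(url);
    }

    // A stopping (or failed) instance is removed from the registry.
    public void unregister(String name, String url) {
        List<String> urls = services.get(name);
        if (urls != null) {
            urls.remove(url);
        }
    }

    // A client asks for every known instance of a named service.
    public List<String> lookup(String name) {
        return List.copyOf(services.getOrDefault(name, List.of()));
    }
}
```

Note that `lookup` returns all instances; choosing which one to call is the client’s load-balancing decision, discussed next.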

Depending on whether you’re using a framework or consuming microservices without one, several algorithms can be used to choose a specific location. The algorithm could be as simple as cycling through each location in order, known as a round-robin. Or the algorithm can increase in complexity by taking into account factors such as current load and response times.

As we discussed earlier, you shouldn’t be developing custom load-balancing algorithms in your microservice. If a microservice needs anything more complicated than a basic round-robin, or a random choice from a list, consider including a library that provides those algorithms for you.

Whether your microservice uses a simple load-balancing algorithm internally, or you use a library for it, there is the question of how long to retain the instance URL you’ve been given. In an ideal world, you wouldn’t retain the instance URL for any length of time, allowing instances of microservices to come and go without affecting clients. If at all possible, you’re in a better place if you can start with this approach.

If an environment isn’t suited to performing service discovery on every request, your microservice shouldn’t hold onto a physical URL for more than 10 to 15 seconds. That may not seem like long, but a microservice instance can go from functioning to failed in far less time. An extra burden of caching URLs is that your code needs to be more vigilant about catching network failures and microservice errors, either retrying the request or using service discovery to retrieve a fresh instance.
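If you do cache discovered URLs, bound the cache with a short time to live so a stale list is re-fetched from the registry. A sketch of that idea, with the clock passed in explicitly so the expiry logic is easy to test (names and structure are illustrative, not any framework’s API):

```java
import java.util.List;
import java.util.function.Supplier;

// Sketch of caching discovered instance URLs for a short time to live,
// re-running service discovery once the cached entries go stale.
public class CachedLookup {
    private final Supplier<List<String>> discovery; // e.g. a registry query
    private final long ttlMillis;
    private List<String> cached = List.of();
    // Half of MIN_VALUE so the first "now - fetchedAt" can't overflow.
    private long fetchedAt = Long.MIN_VALUE / 2;

    public CachedLookup(Supplier<List<String>> discovery, long ttlMillis) {
        this.discovery = discovery;
        this.ttlMillis = ttlMillis;
    }

    public List<String> instances(long nowMillis) {
        if (nowMillis - fetchedAt >= ttlMillis) { // stale: discover again
            cached = discovery.get();
            fetchedAt = nowMillis;
        }
        return cached;
    }
}
```

In production code you’d call `instances(System.currentTimeMillis())` with a TTL of 10 to 15 seconds, per the guidance above.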

7.1.2. What are the benefits of service discovery and a registry?

Why do you want the extra infrastructure and management of a service registry for your microservices environment? Can’t you use a properties file to externalize the URL of anything your microservice needs to consume?

Sure, this was how most external services were integrated into applications in the past. This technique provided an easy way to change the URL of external services when moving between testing and production environments.

But this approach doesn’t allow for easy scaling of an application or microservice, either up or down. With a shift toward cloud deployments, one of the biggest changes is the way such an environment is charged to an enterprise.

In the past, enterprises would have internal infrastructure to host all their applications, whether for internal or public use. The main cost with internal infrastructure is in the initial setup. When that’s complete, the ongoing hardware cost is minimal, though it does result in a larger operations cost from managing an internal infrastructure.

A migration to the cloud for most enterprises means not hosting applications on their own infrastructure but instead deploying to external hosting providers. Examples of these are Red Hat OpenShift, Google Cloud, and Amazon Web Services. These providers shift the cost away from large up-front hardware installations to regular infrastructure usage charges, usually on a monthly basis. This shift in the cost mechanism opens the door to reducing cost by scaling down an application when it’s less used.

Another upside to a scalable environment is being able to scale up when load increases without the often long hardware provision process of an enterprise. This is particularly beneficial to an enterprise that experiences extremely high load during holidays. November and December are big for most retail stores, and a scalable environment provides enterprises the ability to scale their available servers without those servers sitting idle for the remainder of the year.

The ability to quickly and easily scale a particular microservice, or even a group of them, up or down in a cloud environment is a tremendous advantage for enterprise developers. In the past, they had to anticipate increased demand far enough ahead to allow for provisioning new hardware, which often took months. Deploying to the cloud, a new instance, or a series of them, can be running and processing user load within minutes.

Being able to scale an application is tightly linked to how loosely coupled it is. As I mentioned earlier, moving the URLs an application must use out of properties files and into an external registry greatly decreases the coupling between components.

Failover of external services is a concern for all distributed architectures. Maintaining loose coupling, through a service registry, allows failover of a microservice to happen without bringing down the entire application, provided the microservice is scaled to more than a single instance!

Using a service registry in conjunction with service discovery opens the door to enabling you to handle failovers gracefully, but in their own right they aren’t the complete solution. You also need frameworks and libraries that can assist with providing fault tolerance, as you don’t want to be writing it yourself! Chapter 8 shows how fault tolerance can be incorporated into your microservices.

As in figure 7.3, here are the steps for your Payment microservice to make a call to the Stripe microservice by discovering it through a service registry; see figure 7.4:

  1. The Payment microservice requests the locations for the Stripe microservice from a known service registry.
  2. The service registry returns all the available Stripe instances.
  3. The Payment microservice sends a request to the Stripe microservice instance retrieved from the service registry.
  4. The Payment microservice receives a response from the Stripe microservice.

In figure 7.3, Payment consumed the Stripe instance running on port 8082. But in figure 7.4, you can see that the Stripe instance on port 8082 is no longer functioning when another request is processed. How it failed, we don’t know, but it’s no longer available in the service registry. That’s okay; Payment will contact the service registry for instances of Stripe and will choose the one running on port 8083 from the two that are available.

Figure 7.4. Service discovery with failed microservice

This sounds fantastic! You can scale microservices up or down as you please, within the limits of how the environment performs scaling, without worrying about how clients can find them.

Without a service registry providing metadata about Stripe to Payment, your microservice wouldn’t have any way to insulate itself from failovers or migrations, or a way to recover from them. A service registry is good for more than just getting a new live instance if one has failed. It also handles migrating a microservice to a different environment by hiding from Payment where Stripe is actually running until Payment needs that information.

You can easily create new instances for Stripe in a completely different environment from the existing ones, but still have them available within the same service registry. After the new instances are active, if you were migrating, you could scale down the old instances to shut them down—all without any impact to Payment needing to consume Stripe.

7.1.3. Stateless vs. stateful microservices

Being able to scale microservices at will is most certainly fantastic, but there’s a catch. So far, you’ve been implicitly dealing with stateless microservices, in that the microservice doesn’t retain any data within itself between requests.

What about your state? Microservice development is heavily focused on statelessness, which is key to scaling microservices up and down without any concern for user state from previous requests.

To support scaling, microservices can’t be stateful, at least not in the way that stateful session beans were in Java EE. As the oft-used saying goes, we want our microservices to be more like cattle and less like pets: better to have many that can come and go without impact (cattle) than a few whose disappearance causes major issues (pets). You can still use data from a user’s previous request in your microservice, but that data has to have been stored somewhere external for you to retrieve it.
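The pattern for staying stateless is to read and write per-user data through an external store on every request, so any instance can handle any request. A sketch of that shape (the map stands in for what would really be a database or distributed cache; class names are illustrative):

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a stateless service: nothing survives between requests inside
// the service itself. Per-user state lives in an external store, modeled
// here as a map but typically a database or cache shared by all instances.
public class CartService {
    private final Map<String, Integer> externalStore = new ConcurrentHashMap<>();

    // Every request loads state from the store, mutates it, and writes it
    // back, so a different instance could handle the user's next request.
    public int addItems(String userId, int count) {
        int total = Optional.ofNullable(externalStore.get(userId)).orElse(0) + count;
        externalStore.put(userId, total);
        return total;
    }
}
```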

The shift toward stateless services began in Enterprise Java over the last five years, but the push toward microservices has made it even more prominent. For developers and architects, switching from stateful to stateless thinking is no simple feat. The change requires additional thought up front, and during development, to prevent state from creeping into a microservice.

If there’s a service you already have that’s stateful, and there’s simply no way to break it down into stateless microservices, or the challenge in doing so poses a risk that’s too great, then microservices might not be the best approach. Stick with a more traditional Java EE application server to handle the stateful service and scaling of that service across a small cluster.

7.1.4. What is Netflix Ribbon?

Earlier we talked about load balancing across multiple instances of a single service, and how it wasn’t a good idea to create complicated load balancers in your own code. What do you do if you want load balancing in your client that’s not random or round-robin?

Thorntail provides integration with Netflix Ribbon just for that purpose, saving you from having to develop the algorithms yourself. Ribbon is a client-side software load-balancing framework developed by Netflix for its internal services. It was open sourced in January 2013 as part of a suite of projects that Netflix heavily relies on for its interprocess communication of services. The primary usage for Ribbon is calling RESTful endpoints, which is why it’s a good fit for what you need when consuming Enterprise Java microservices.

Later in the chapter, I’ll show how Ribbon can use a service registry to retrieve instances. Right now, let’s focus on the load-balancing options it provides:

  • Round Robin: Chooses an available server from all those present, in sequential order, regardless of the load each server may be experiencing.
  • Availability Filtering: Skips any servers that are deemed to have had their “circuit tripped”: connection failures on the last three attempts, or a high number of concurrent connections.
  • Weighted Response Time: Each server is given a weighting based on average response times, which is used to generate a range of values representing the server. For instance, if servers A and B have weightings of 5 and 25, respectively, the ranges would be 1–5 (A) and 6–30 (B). A random number is generated between 1 and the sum of all the server weights, and the server whose range contains that number is chosen. A server with a higher weighting, or shorter response time, has a greater chance of being selected.
  • Zone Aware Round Robin: Particularly useful for deployments to Amazon Web Services, where servers are distributed across availability zones. This rule chooses available servers that are in the same zone as the client.
  • Random: Purely random distribution across available servers.

The default choice is Round Robin. If performance is critical to your microservice, Weighted Response Time would be the best choice for load balancing. It’s similar to Round Robin in its behavior, while also favoring those servers that are performing better.

This option is particularly beneficial if a server instance is performing badly to the point that the microservice environment deems it needs to be restarted. You don’t want to continue sending lots of traffic to a microservice that could be restarted at any time.
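The weighted selection described above can be sketched in a few lines. This isn’t Ribbon’s code (Ribbon derives weights from measured response times internally); the weights and the random draw are passed in here so the range logic is visible and testable:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of weighted selection: each server owns a range of values
// proportional to its weight, and a draw between 1 and the total weight
// picks the server whose range contains it.
public class WeightedChooser {
    private final Map<String, Integer> weights = new LinkedHashMap<>();

    public void addServer(String name, int weight) {
        weights.put(name, weight);
    }

    public int totalWeight() {
        return weights.values().stream().mapToInt(Integer::intValue).sum();
    }

    // draw is a number between 1 and totalWeight(), normally random,
    // e.g. ThreadLocalRandom.current().nextInt(totalWeight()) + 1.
    public String choose(int draw) {
        int upper = 0;
        for (Map.Entry<String, Integer> e : weights.entrySet()) {
            upper += e.getValue();          // end of this server's range
            if (draw <= upper) {
                return e.getKey();
            }
        }
        throw new IllegalArgumentException("draw outside 1.." + upper);
    }
}
```

With weightings of 5 (A) and 25 (B), as in the example above, any draw from 1–5 selects A and any draw from 6–30 selects B, so B is chosen five times as often.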

It might be unclear from what we’ve discussed so far where Ribbon fits with respect to figure 7.3. You can see in figure 7.5 that Ribbon is part of your microservice, in this case Payment, that wants to consume another microservice, Stripe. Ribbon is then responsible for interacting with a service registry, choosing which server instance from those available to use, and finally executing a request against that instance.

Figure 7.5. Service discovery with Netflix Ribbon

For Ribbon to know where the service registry is located, you need to specify a class that’s responsible for retrieving the list of available instances for a service. Which class is required depends on the service registry being used. For instance, com.netflix.niws.loadbalancer.DiscoveryEnabledNIWSServerList is the class to be used when accessing a service registry provided by Eureka through its custom client code. Eureka is a service registry developed by Netflix, but integration with Eureka isn’t available in Thorntail.

Warning

Netflix has announced that it’s no longer actively maintaining Ribbon. The project’s GitHub site (https://github.com/Netflix/ribbon) details which parts Netflix still uses and which it doesn’t. Although Ribbon isn’t actively maintained, it’s stable and production ready for most use cases. As Thorntail makes Ribbon available for consuming microservices, the Thorntail team is actively investigating long-term alternatives to Ribbon.

7.2. Registering a microservice with Thorntail

You’ve seen how a service registry can benefit your microservices by decoupling you from the URL locations of anything you need to consume. That’s the theory. Now it’s time to see service registration and discovery in action! You’ll take a look at your options for a service registry with Thorntail, which are known as topologies, before seeing how to register a microservice so it can be discovered by others.

7.2.1. Thorntail’s topologies

Thorntail provides an abstraction over a service registry that’s referred to as a topology. What benefit does the abstraction provide? It means your client code doesn’t need to change if your microservice is moved into an environment with a different service registry implementation. The most likely use case for this is developing and testing locally against one type of service registry and then using a different one in test and production environments.

In an ideal world, you could run a like-for-like copy of the production environment on your local machine for testing, but that’s not always possible in enterprises today. Moving toward a cloud-based infrastructure, such as Kubernetes and OpenShift, combined with Linux containers, makes it easier to replicate those environments with fewer resources. But not every enterprise will reach that point.

What service registry implementations, or topology types, does Thorntail offer? It offers these:

  • JGroups: JGroups is a toolkit for reliable messaging in which clusters of nodes are created for the purpose of sending messages to each other. Thorntail creates a pseudo service registry by forming a cluster from every microservice and notifying each one of new services as they register themselves.
  • OpenShift: Red Hat OpenShift is a container platform that uses Kubernetes to manage containers. You can use an online version, install it locally into your own environment, or use it within Minishift, as you saw earlier.
  • Consul: Consul, developed by HashiCorp, is a popular service discovery framework.

How do you choose which one to use? In some cases, the choice is determined by where your microservice is being deployed. If it’s being deployed to Red Hat OpenShift, using the OpenShift topology is logical.

The JGroups topology is best used for local development on a laptop, or in CI environments for which a full-fledged service discovery implementation may not be installed. As you saw in chapter 5, you also can use Minishift to ensure that your local development environment is as close as possible to production if you’re deploying to Red Hat OpenShift.

Beyond those natural alignments, your choice depends on the requirements around service discovery and which particular implementation best fits the needs of the environment. Such a decision is usually not in the hands of developers, unless they’re part of a DevOps culture that allows each team to build its own preferred stack of technologies.

Where does a topology implementation, using Consul as an example, fit in relation to figure 7.3? Take a look at figure 7.6.

Figure 7.6. Thorntail topology integration

The topology sits between the microservice and the service registry implementation. This enables your microservice code to remain unchanged, whether you’re deployed in an environment that uses JGroups, OpenShift, or Consul!

To select one of these topology implementations for use in your microservice, you add one of the following dependencies:

  • topology-jgroups
  • topology-openshift
  • topology-consul

Thorntail also provides a topology servlet, through the topology-webapp dependency, that sends server-sent events (SSEs) whenever services are registered or removed from the topology. The topology servlet works alongside any of the topology implementations from the preceding list. To see these events, add the following dependency to pom.xml:

<dependency>
  <groupId>io.thorntail</groupId>
  <artifactId>topology-webapp</artifactId>
</dependency>

After the microservice is running, either locally or in the cloud, open a browser to http://host:port/topology/system/stream to see the events showing the available instances. This allows a UI to visually represent the instances of each service that are present in the topology, as well as maintain a current list of which service instances are available for use.

7.2.2. Registering a microservice with a topology

In our example, you have the Payment and Stripe microservices. Because Payment needs to “discover” Stripe, it first must be registered.

With Thorntail, you have several options for registering a microservice. All of them require only that the topology dependency you chose in the previous section be added to your application’s pom.xml.

Before you delve into the options for registering your microservices, let’s see the code for the Stripe microservice. This will aid in your understanding of what’s going on later in the chapter. For the pom.xml, you’ll focus on the dependencies you need. There are plenty more in the project, but they’re not necessary for understanding what’s going on:

<dependency>
  <groupId>io.thorntail</groupId>
  <artifactId>jaxrs</artifactId>
</dependency>
<dependency>
  <groupId>io.thorntail</groupId>
  <artifactId>cdi</artifactId>
</dependency>

<dependency>
  <groupId>com.stripe</groupId>
  <artifactId>stripe-java</artifactId>
  <version>5.27.0</version>
</dependency>

The first two dependencies add jaxrs and cdi capabilities and are familiar from previous examples. The last dependency provides access to the payment APIs from Stripe.

Note

Stripe (https://stripe.com) is a service offering card transaction processing for merchants and websites. A nice aspect of Stripe is the ability to use test API keys, as you’ll use in your examples, and test credit card tokens to generate particular responses from its APIs. If you’d like to set up your own Stripe account to see the data appearing in its test dashboard, replace the stripe.key value in project-defaults.yml, and the transactions will reach your own test account.

To define the Stripe microservice, first you create the Application class to provide the JAX-RS root endpoint.

Listing 7.1. StripeApplication
@ApplicationPath("/stripe")
public class StripeApplication extends Application {
}

StripeApplication is similar to the previous examples. The only point to note is that you’re setting the JAX-RS root path to be /stripe.

For deploying to OpenShift and using Thorntail topologies for service discovery, you need to create a service account that gives the topology access to OpenShift services. A service account is like a user account for services: a service can be granted or denied permissions to perform certain actions. With the fabric8 Maven plugin, this is easy enough to do with YAML files.

Listing 7.2. service-sa.yml
metadata:
  name: service        1

  • 1 Name of service account

For the topology to see the services within OpenShift, you need the view role for your microservice. Now you need to define a role binding to match the service account with that role.

Listing 7.3. service-rb.yml
metadata:
  name: view-service            1
subjects:
- kind: ServiceAccount
  name: service                 2
roleRef:
  name: view                    3

  • 1 Name of the role binding
  • 2 Service account to use for the role binding
  • 3 Role name from OpenShift to give access to service names

Now you need to associate the service account with your microservice.

Listing 7.4. deployment.yml
apiVersion: v1
kind: Deployment
metadata:
  name: ${project.artifactId}          1
spec:
  template:
    spec:
      serviceAccountName: service      2

  • 1 OpenShift deployment name
  • 2 Service account to associate with the deployment

Without settings to the contrary, the deployment name defaults to ${project.artifactId}. The custom deployment.yml is required solely to associate the service account with the deployment.

@Advertise

Now let’s take a look at the JAX-RS resource that will interact with the Stripe APIs.

Listing 7.5. StripeResource
@Path("/")
@ApplicationScoped
@Advertise("chapter7-stripe")                                          1
public class StripeResource {

  @Inject
  @ConfigurationValue("stripe.key")                                    2
  private String stripeKey;

  @POST
  @Path("/charge")
  @Consumes(MediaType.APPLICATION_JSON)
  @Produces(MediaType.APPLICATION_JSON)
  public ChargeResponse submitCharge(ChargeRequest chargeRequest) {
    Stripe.apiKey = this.stripeKey;                                    3

    Map<String, Object> chargeParams = new HashMap<>();
    chargeParams.put("amount", chargeRequest.getAmount());             4
    chargeParams.put("currency", "usd");
    chargeParams.put("description", chargeRequest.getDescription());
    chargeParams.put("source", chargeRequest.getCardToken());

    Charge charge = Charge.create(chargeParams);                       5

    return new ChargeResponse()                                        6
            .chargeId(charge.getId())
            .amount(charge.getAmount());
  }
}

  • 1 Defines the name under which you want to advertise a microservice via the topology
  • 2 Inject the configuration value defined by stripe.key in project-defaults.yml.
  • 3 Set the Stripe API key onto the Stripe API main class.
  • 4 Create a Map of all the request parameters, taking them from ChargeRequest.
  • 5 Call the Stripe API to initiate a charge.
  • 6 Return a ChargeResponse containing the amount and charge ID that was received from Stripe.

So you’ve added @Advertise to your Stripe microservice, but how does that relate to the topology?

The Thorntail topology will find all the @Advertise annotations you’ve added to RESTful endpoints in your microservice code, and store each name into a file within your deployment that’s created at runtime. The topology has runtime code that’s added to your microservice deployment that will advertise those names, with appropriate host and port information indicating where the microservice is located, to whichever implementation you’ve chosen (JGroups, OpenShift, Consul) when the deployment is started. @Advertise abstracts away the need for your microservice code to know the details of how to register a microservice. You simply provide a name for it.

Note

When using Topology and deploying to OpenShift, the advertising function is essentially a NoOp because OpenShift registers all microservices with its internal DNS. The main advantage to using @Advertise on the producing microservice is that you can easily switch your topology environment without altering your code.

Topology.lookup()

You also can register services in a way that provides greater control over the timing of when a service is available, by using Topology.lookup(). Topology provides the main abstraction over each service registry implementation by offering methods for static lookup(), adding and removing listeners to be notified as services are added or removed, registering a microservice through advertise(), and retrieving all the current registry entries with asMap().

Whichever topology implementation you’ve chosen for your microservice—JGroups, OpenShift, or Consul—Topology is always available for a microservice to use directly.

Let’s say you want to use Topology to manually advertise and unadvertise a microservice. One advantage of this approach is that the microservice isn’t added to the service registry until the RESTful endpoint is active and available to handle requests.

Listing 7.6. Topology
AdvertisementHandle handle = Topology.lookup().advertise("allevents");   1
...
handle.unadvertise();                                                    2

  • 1 Look up the Topology instance and advertise your service, retaining a handle.
  • 2 When your service is shutting down, use the handle to unadvertise yourself.

7.3. Consuming a registered microservice with Thorntail

Now that you’ve registered your Stripe microservice, it’s time to develop Payment to be able to discover it so you can consume it. This section covers two approaches for service discovery. Each uses a different client library, Netflix Ribbon and RESTEasy, for different implementations of Payment.

7.3.1. Service lookup with Netflix Ribbon

To use Netflix Ribbon as your client framework, the first thing you need to do is add it as a dependency to your Maven module:

<dependency>
  <groupId>io.thorntail</groupId>
  <artifactId>ribbon</artifactId>
</dependency>

This dependency gives your microservice access to the Netflix Ribbon libraries. In addition, it integrates with whichever topology implementation you’ve chosen for your registry. This enables Netflix Ribbon to use your topology implementation for retrieving service instances for load balancing. Next you need to create an interface to represent the external microservice, Stripe, that you want to call.

Listing 7.7. StripeService
@ResourceGroup(name = "chapter7-stripe")                                  1
public interface StripeService {
  StripeService INSTANCE = Ribbon.from(StripeService.class);              2

  @TemplateName("charge")                                                 3
  @Http(                                                                  4
        method = Http.HttpMethod.POST,
        uri = "/stripe/charge",
        headers = {
                @Http.Header(
                        name = "Content-Type",
                        value = "application/json"
                )
        }
  )
  @ContentTransformerClass(ChargeTransformer.class)                       5
  RibbonRequest<ByteBuf> charge(@Content ChargeRequest chargeRequest);    6
}

  • 1 Name of the service in the Service Registry you want to call
  • 2 Creates a proxy of your interface you can use
  • 3 Identifies the method name for which Ribbon creates a template
  • 4 Defines the HTTP parameters to execute the external request for charge() including HTTP Method, URI path, and HTTP header for content type
  • 5 Defines a transformer to convert ChargeRequest into ByteBuf
  • 6 Method must return a RibbonRequest

If Stripe had more than a single RESTful endpoint that you wanted to make requests against, each method definition in the interface would require its own @TemplateName and @Http annotations.
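For instance, if Stripe also exposed a refund endpoint, a second method on the interface might look like the following sketch. This is purely illustrative: the /stripe/refund URI, the RefundRequest type, and the RefundTransformer class are hypothetical and aren’t part of the chapter’s code.

```java
// Hypothetical second endpoint on the same interface; each method gets
// its own template name and HTTP mapping.
@TemplateName("refund")
@Http(
      method = Http.HttpMethod.POST,
      uri = "/stripe/refund",
      headers = {
              @Http.Header(
                      name = "Content-Type",
                      value = "application/json"
              )
      }
)
@ContentTransformerClass(RefundTransformer.class)
RibbonRequest<ByteBuf> refund(@Content RefundRequest refundRequest);
```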

Listing 7.7 uses the annotation-based approach of Netflix Ribbon, but if you prefer a fluent API, you can use HttpResourceGroup and HttpRequestTemplate to build up an equivalent HTTP request.
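As a rough sketch of that fluent style, an equivalent of the charge() definition from listing 7.7 might be built like this. Treat it as an approximation: the builder method names follow Ribbon’s documented fluent API, but exact options can vary by Ribbon version.

```java
// Sketch of the fluent equivalent of listing 7.7 (approximate; verify
// against the Ribbon version on your classpath).
HttpResourceGroup stripeGroup = Ribbon.createHttpResourceGroup("chapter7-stripe");

HttpRequestTemplate<ByteBuf> chargeTemplate =
    stripeGroup.newTemplateBuilder("charge", ByteBuf.class)
        .withMethod("POST")
        .withUriTemplate("/stripe/charge")
        .withHeader("Content-Type", "application/json")
        .build();

RibbonRequest<ByteBuf> request = chargeTemplate.requestBuilder().build();
```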

Now let’s take a look at ChargeTransformer, which is responsible for converting ChargeRequest into ByteBuf.

Listing 7.8. ChargeTransformer
public class ChargeTransformer implements ContentTransformer<ChargeRequest> {
  @Override
  public ByteBuf call(ChargeRequest chargeRequest,
      ByteBufAllocator byteBufAllocator) {
    try {
      byte[] bytes = new ObjectMapper().writeValueAsBytes(chargeRequest);  1
      ByteBuf byteBuf = byteBufAllocator.buffer(bytes.length);             2
      byteBuf.writeBytes(bytes);                                           3
      return byteBuf;
    } catch (JsonProcessingException e) {
      e.printStackTrace();
    }
    return null;
  }
}

  • 1 Use an ObjectMapper to convert ChargeRequest into JSON format.
  • 2 Allocate a new ByteBuf instance with the appropriate length.
  • 3 Write the JSON as bytes into the ByteBuf.

ChargeTransformer handles the conversion only when making the request. You need to handle converting ByteBuf into a meaningful response within your calling code.

Let’s see what your Payment resource looks like when using Netflix Ribbon.

Listing 7.9. PaymentResource
@Path("/")
public class PaymentServiceResource {

  @POST
  @Path("/sync")
  @Consumes(MediaType.APPLICATION_JSON)
  @Produces(MediaType.APPLICATION_JSON)
  public ChargeResponse chargeSync(ChargeRequest chargeRequest) {
    ByteBuf buf = StripeService.INSTANCE.charge(chargeRequest).execute();  1
    return extractResult(buf);                                             2
  }

  @POST
  @Path("/async")
  @Consumes(MediaType.APPLICATION_JSON)
  @Produces(MediaType.APPLICATION_JSON)
  public void chargeAsync(@Suspended final AsyncResponse asyncResponse,
     ChargeRequest chargeRequest)
        throws Exception {
    executorService().submit(() -> {
      Observable<ByteBuf> obs =
          StripeService.INSTANCE.charge(chargeRequest).toObservable();     3
      obs.subscribe(                                                       4
            (result) -> {
                asyncResponse.resume(extractResult(result));               5
            },
            asyncResponse::resume
      );
    });
  }

  private ChargeResponse extractResult(ByteBuf result) {                   6
    byte[] bytes = new byte[result.readableBytes()];
    result.readBytes(bytes);
    try {
      return new ObjectMapper()                                            7
                .readValue(bytes, ChargeResponse.class);
    } catch (IOException e) {
      e.printStackTrace();
    }

    return null;
  }
}

  • 1 Call Stripe synchronously.
  • 2 Extract the result and return it.
  • 3 Create an Observable to call Stripe asynchronously.
  • 4 Subscribe to the Observable, passing success and failure methods.
  • 5 Extract the ChargeResponse from the result and set it on the AsyncResponse.
  • 6 Convert a ByteBuf into a ChargeResponse.
  • 7 Use an ObjectMapper to convert bytes of JSON into a ChargeResponse instance.

Let’s see how this all works!

First you need to have Minishift running (see chapter 5 for details) and be logged into the OpenShift client. Next you need to run the Stripe microservice; to do that, change into the /chapter7/stripe directory and run this:

mvn clean fabric8:deploy -Popenshift -DskipTests

With the Stripe microservice now running, change into the /chapter7/ribbon-client directory and run this:

mvn clean fabric8:deploy -Popenshift -DskipTests

The URL of the service is the URL in the OpenShift console for the chapter7-ribbon-client service, with /sync or /async added to the end.

Because you need to issue an HTTP POST request on either of these URLs, the process is a bit more complicated than just opening a browser and entering the URL. Many tools can be used for issuing the request you need, including curl on the command line, but you’ll use Postman, shown in figure 7.7.

Figure 7.7. Postman calling the Ribbon client service

Note

Postman has a lot of functionality, across a few versions, but at its core it provides the ability to test API endpoints. Most important, for me, it offers the ability to save requests, including headers and body content, so that the same request can be repeated whenever you need it. For further details, take a look at www.getpostman.com.

Here you can see the request details—including the body of the HTTP POST in the top half, and the response you received from the service at the bottom.

The most important header to set is Content-Type with a value of application/json. If you don’t set that header, JAX-RS doesn’t believe it’s receiving JSON and rejects the request with an HTTP response code of 415, indicating an unsupported media type.
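If you’d rather use curl than Postman, the equivalent request looks something like the following. The host shown is a placeholder for your own OpenShift route, and the JSON fields are illustrative; use whatever fields your ChargeRequest actually expects.

```shell
# Placeholder host: substitute the route shown in your OpenShift console.
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"amount": 4550, "currency": "usd", "description": "Test charge"}' \
  http://chapter7-ribbon-client-myproject.YOUR-MINISHIFT-IP.nip.io/sync
```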

To see the topology, you can install the topology-webapp dependency into ribbon-client to see all the registration events. Modify the pom.xml to include the following:

<dependency>
  <groupId>io.thorntail</groupId>
  <artifactId>topology-webapp</artifactId>
</dependency>

Then from the /chapter7/ribbon-client directory, run this:

mvn clean fabric8:deploy -Popenshift -DskipTests

In the OpenShift console, click the URL for the ribbon-client microservice. Then add /topology/system/stream to the end of the URL in the browser window. The browser will immediately show the event that registered both your microservices, chapter7-stripe and chapter7-ribbon-client, with the topology:

event: topologyChange
data: {
  "chapter7-stripe": [
    {
      "endpoint": "http://chapter7-stripe:8080",
      "tags":["http"]
    }
  ],
  "chapter7-ribbon-client": [
    {
      "endpoint": "http://chapter7-ribbon-client:8080",
      "tags":["http"]
    }
  ]
}

One thing you’ll notice about the URLs for each of the microservices is that they don’t include the usual IP address and nip.io suffix of OpenShift URLs. These URLs are internal OpenShift URLs; they won’t work when used outside the OpenShift environment.

7.3.2. Service lookup with the RESTEasy client

Apart from using a different client framework to call the Stripe microservice, with RESTEasy you’re also going to call the Topology.lookup method yourself to retrieve service information from Topology. You need to do that because RESTEasy has no way to perform the lookup for you, as Ribbon does.

To use RESTEasy as your client framework, the first thing you need to do is add it as a dependency to your Maven module:

<dependency>
  <groupId>org.jboss.resteasy</groupId>
  <artifactId>resteasy-client</artifactId>
  <version>3.0.24.Final</version>
  <scope>provided</scope>
</dependency>

You mark it as provided because it’s already on the classpath supplied by Thorntail, but you need it defined for local compilation. Next you need to create an interface to represent the external microservice, Stripe, that you want to call.

Listing 7.10. StripeService
@Path("/stripe")
public interface StripeService {

    @POST
    @Path("/charge")
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    ChargeResponse charge(ChargeRequest chargeRequest);

}

As you can see, this code is a lot simpler and easier to comprehend than the Ribbon equivalent. Let’s see what your Payment resource looks like when using RESTEasy.

Listing 7.11. PaymentServiceResource
@Path("/")
public class PaymentServiceResource {
  private Topology topology;

  public PaymentServiceResource() {
    try {
      topology = Topology.lookup();                                   1
    } catch (NamingException e) {
      e.printStackTrace();
    }
  }

  @POST
  @Path("/sync")
  @Consumes(MediaType.APPLICATION_JSON)
  @Produces(MediaType.APPLICATION_JSON)
  public ChargeResponse chargeSync(ChargeRequest chargeRequest)
        throws Exception {
    ResteasyClient client = new ResteasyClientBuilder().build();
    URI url = getService("chapter7-stripe");                          2
    ResteasyWebTarget target = client.target(url);
    StripeService stripe = target.proxy(StripeService.class);
    return stripe.charge(chargeRequest);
  }

  ...

  private URI getService(String name) throws Exception {
    Map<String, List<Topology.Entry>> map = this.topology.asMap();    3

    if (map.isEmpty()) {
      throw new Exception("Service not found for '" + name + "'");
    }

    Optional<Topology.Entry> seOptional = map
            .get(name)
            .stream()
            .findFirst();                                             4

    Topology.Entry serviceEntry =
        seOptional.orElseThrow(                                       5
          () -> new Exception("Service not found for '" + name + "'")
        );

    return new URI("http", null, serviceEntry.getAddress(),
        serviceEntry.getPort(), null, null, null);
  }
}

  • 1 On creation of PaymentServiceResource, retrieve the Topology instance.
  • 2 Retrieve a URI for the chapter7-stripe service.
  • 3 Get the Service Registry to find the service you need.
  • 4 For a list of registrations for the chapter7-stripe service, find the first one.
  • 5 If the Optional is empty, throw an exception that a service couldn’t be found.

You likely noticed the extra work of looking up Topology by calling Topology.lookup(), which wasn’t required when using Netflix Ribbon as a client. Netflix Ribbon performs the service lookup based on the @ResourceGroup name, directly interacting with Topology to retrieve the information it needs.

As you can see when retrieving a topology entry from the map, you’re taking only the first URI for a given service, because you’re not load balancing across multiple instances. With OpenShift, it’s not necessary to load balance on the client side, because OpenShift performs this task for you on the server.
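If you did need client-side balancing, say, in an environment without server-side load balancing, a minimal round-robin selector over the URIs resolved from the registry might look like the following sketch. This is plain JDK code of my own; the class name and the idea of feeding it URIs built from Topology entries are illustrative, not part of the chapter’s source.

```java
import java.net.URI;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal client-side round-robin selection over service instance URIs.
// In the chapter's setup, the List<URI> would be built from the
// Topology entries registered for a service name.
public class RoundRobinSelector {
    private final AtomicInteger counter = new AtomicInteger();

    public URI choose(List<URI> instances) {
        if (instances == null || instances.isEmpty()) {
            throw new IllegalStateException("No instances available");
        }
        // floorMod keeps the index non-negative even after the counter
        // wraps around Integer.MAX_VALUE.
        int index = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(index);
    }

    public static void main(String[] args) {
        RoundRobinSelector selector = new RoundRobinSelector();
        List<URI> instances = List.of(
                URI.create("http://10.0.0.1:8080"),
                URI.create("http://10.0.0.2:8080"));
        // Successive calls alternate between the two instances.
        for (int i = 0; i < 4; i++) {
            System.out.println(selector.choose(instances));
        }
    }
}
```

A production-grade selector would also weed out unhealthy instances, but the rotation logic itself is this simple.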

If you’re deploying to a different environment, it’s likely that for a production situation you’d want to use an algorithm, or a variety of algorithm options, to choose which service instance to consume.

With the Stripe microservice running from earlier, change into the /chapter7/resteasy-client directory and run this:

mvn clean fabric8:deploy -Popenshift -DskipTests

As with the Ribbon example, the URL of the service is the URL in the OpenShift console for the chapter7-resteasy-client service, with /sync or /async added to the end. Once again, to test the endpoints, you need a tool (either Postman or whatever you prefer) to execute the POST request. If all has gone well, you should receive a similar response to the Ribbon example when executing the requests.

Summary

  • Code that hardcodes the locations of the microservices it consumes is prone to failures as instances come up and down. When a microservice moves location, such code also requires updates to code or configuration, and redeployment of those changes across every impacted environment. Service discovery provides the separation you need for your microservices to scale without relying on IP addresses directly.
  • To be able to discover services to consume, you need them to be registered in a central place so your microservice can retrieve them. A service registry fulfills that role in a microservice environment.
  • Thorntail allows you to use JGroups, OpenShift, or Consul as a service registry implementation in your microservice environments.
  • Using a Netflix Ribbon client in your microservice removes lookups from your client code while allowing you to take advantage of Thorntail topology implementations for service discovery.