© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2022
D. R. Heffelfinger, Payara Micro Revealed, https://doi.org/10.1007/978-1-4842-8161-1_7

7. High Availability and Fault Tolerance

David R. Heffelfinger
Fairfax, VA, USA

When developing an application using a microservices architecture, we typically have a number of services that depend on one another. It is possible that one or more of the services may go down, possibly bringing the whole system down with them. The MicroProfile Fault Tolerance API provides functionality we can use to mitigate this risk, providing several annotations we can use to configure how the system behaves when one or more microservices are not available or otherwise not working properly.

Asynchronously Calling RESTful Web Service Endpoints

MicroProfile Fault Tolerance provides the @Asynchronous annotation, which allows RESTful web service endpoints to be invoked asynchronously. This allows the client to continue processing and not block when invoking a service that may take a long time. This is especially useful when the client has to call multiple services that may each take a while to return; by invoking them asynchronously, these services can execute in parallel, reducing the time the client would have to wait to get the results.

The @Asynchronous annotation can be applied to methods in request scoped CDI beans; the method must return an implementation of either the Future or the CompletionStage interface. A return type of CompletionStage is preferred because, if the invoked method throws an exception, any other fault tolerance annotations applied to the method are still honored; this is not the case with methods returning Future.

Recall that we can turn any RESTful web service into a CDI bean simply by applying one of the scope annotations, such as @RequestScoped.

The following example illustrates how we can indicate that our endpoint may be called asynchronously:
package com.ensode.faulttolerance;
//imports omitted
@RequestScoped
@Path("faulttoleranceexample")
public class FaulToleranceExampleResource {
  @Asynchronous
  @GET
  @Path("async")
  @Produces(MediaType.TEXT_PLAIN)
  public CompletionStage<Integer> getAsynchronousValue()
    throws InterruptedException {
    TimeUnit.SECONDS.sleep(5);
    return CompletableFuture.completedStage(18);
  }
  @Asynchronous
  @GET
  @Path("async2")
  @Produces(MediaType.TEXT_PLAIN)
  public CompletionStage<Integer> getAnotherAsynchronousValue()
    throws InterruptedException {
    TimeUnit.SECONDS.sleep(7);
    return CompletableFuture.completedStage(24);
  }
}

As we can see in the example, all we have to do is annotate any methods to be invoked asynchronously with @Asynchronous and make sure each method returns either a Future or a CompletionStage. When a client invokes our asynchronous methods, control returns immediately to the client; the client won't block waiting for a result.

When using the MicroProfile REST Client API to develop our client, we apply the same @Asynchronous annotations used on the service to our REST client interface methods.
package com.ensode.fault.toleranceclient;
//imports omitted
@RegisterRestClient
@Path("faulttoleranceexample")
public interface FaultToleranceExampleResourceClient {
  @Asynchronous
  @GET
  @Path("async")
  @Produces(MediaType.TEXT_PLAIN)
  public CompletionStage<Integer> getAsynchronousValue()
    throws InterruptedException;
  @Asynchronous
  @GET
  @Path("async2")
  @Produces(MediaType.TEXT_PLAIN)
  public CompletionStage<Integer> getAnotherAsynchronousValue()
    throws InterruptedException;
}
Then the service acting as a client for the asynchronous methods would use our client interface as usual.
package com.ensode.fault.toleranceclient;
//imports omitted
@Path("/faulttoleranceclient")
public class FaultToleranceClientService {
  @Inject
  @RestClient
  private FaultToleranceExampleResourceClient client;
  @GET
  @Produces(MediaType.TEXT_PLAIN)
  public String get() throws InterruptedException,
    ExecutionException {
    Integer answer;
    Integer value1;
    Integer value2;
    String retVal;
    CompletionStage<Integer> asynchronousValue =
      client.getAsynchronousValue();
    CompletionStage<Integer> asynchronousValue2 =
      client.getAnotherAsynchronousValue();
    value1 = asynchronousValue.toCompletableFuture().get();
    value2 = asynchronousValue2.toCompletableFuture().get();
    answer = value1 + value2;
    retVal = String.format("The answer is %d ", answer);
    return retVal;
  }
}

One of the invoked methods takes approximately five seconds to return a value; the other one takes approximately seven seconds. Had we called these methods synchronously, the client would have had to wait approximately 12 seconds to obtain both results; since we call them asynchronously (they run in parallel), the client only has to block for approximately seven seconds to obtain both results.
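The timing difference can be reproduced without a container using plain CompletableFutures. The following standalone sketch (class name, values, and sleep durations are illustrative, not part of the example application) starts two slow computations in parallel and merges their results with thenCombine(), which avoids blocking on each result individually:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

//Standalone sketch, not part of the example application: two slow
//computations started in parallel, combined once both complete.
public class ParallelCallsDemo {

  //Stands in for one of the asynchronous endpoint invocations
  static CompletableFuture<Integer> slowValue(int value, long seconds) {
    return CompletableFuture.supplyAsync(() -> {
      try {
        TimeUnit.SECONDS.sleep(seconds);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
      return value;
    });
  }

  public static void main(String[] args) throws Exception {
    CompletableFuture<Integer> first = slowValue(18, 2);  //both futures are
    CompletableFuture<Integer> second = slowValue(24, 3); //already running
    //thenCombine merges both results once the slower stage completes
    int answer = first.thenCombine(second, Integer::sum).get();
    System.out.println("The answer is " + answer);
  }
}
```

The total wait is roughly the duration of the slower computation, not the sum of both, mirroring the behavior of the asynchronous endpoints above.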

Limit Concurrent Execution to Avoid Overloading the System

MicroProfile Fault Tolerance provides the @Bulkhead annotation, named after the Bulkhead design pattern, since it allows us to implement this pattern with minimal effort. @Bulkhead can be used to specify the maximum number of concurrent instances for a RESTful web service endpoint.

There are two ways to use the @Bulkhead annotation: the semaphore style and the thread pool style.

Using Semaphores for Synchronous Endpoints

When using the semaphore style, the value argument of the annotation indicates the maximum number of concurrent invocations to a microservice endpoint; any additional attempts for concurrent invocations result in a BulkheadException.

The following example illustrates the semaphore style usage of the @Bulkhead annotation:
package com.ensode.faulttolerance;
//imports omitted
@RequestScoped
@Path("faulttoleranceexample")
public class FaulToleranceExampleResource {
  @Inject
  private ConcurrentInvocationCounter concurrentInvocationCounter;
  @POST
  @Path("semaphorebulkhead")
  @Bulkhead(3)
  @Produces(MediaType.TEXT_PLAIN)
  public String semaphoreBulkHeadDemo() throws
    InterruptedException {
    String retVal;
    concurrentInvocationCounter.increaseCounter();
    retVal = String.format(
      "There are %d concurrent invocations to this endpoint ",
       concurrentInvocationCounter.getCounter());
    TimeUnit.SECONDS.sleep(3);
    concurrentInvocationCounter.decreaseCounter();
    return retVal;
  }
}

In our example, we are allowing up to three concurrent invocations to our RESTful web service endpoint; any attempt to invoke the endpoint while there are already three concurrent invocations will fail with a BulkheadException; once the number of concurrent invocations decreases below three, the method can be successfully invoked again.

The following example code illustrates what happens when we generate more concurrent calls than allowed by the @Bulkhead annotation:
package com.ensode.fault.toleranceclient;
//imports omitted
@Path("/faulttoleranceclient")
public class FaultToleranceClientService {
  private static final Logger LOGGER =
    Logger.getLogger(FaultToleranceClientService.class.getName());
  @Inject
  @RestClient
  private FaultToleranceExampleResourceClient client;
  @POST
  @Path("semaphorebulkhead")
  public void semaphoreBulkheadClient() throws
    InterruptedException {
    ExecutorService executorService =
      Executors.newFixedThreadPool(4);
    Callable<String> semaphoreBulkheadCallable =
      () -> client.semaphoreBulkHeadDemo();
    List<Future<String>> callResults = executorService.invokeAll(
      List.of(semaphoreBulkheadCallable,semaphoreBulkheadCallable,
        semaphoreBulkheadCallable,semaphoreBulkheadCallable));
    callResults.forEach(fut -> {
      try {
        LOGGER.log(Level.INFO, fut.get());
      } catch (InterruptedException | ExecutionException ex) {
        LOGGER.log(Level.SEVERE, String.format(
          "%s caught", ex.getClass().getName()), ex);
      }
    });
  }
}

With a little help from the Concurrency Utilities API, we spawn four threads, each of which invokes the endpoint on our service annotated with the @Bulkhead annotation. Since we specified a maximum of three concurrent invocations, the last invocation fails with a BulkheadException, as expected.

Typically, there would be multiple concurrent clients making concurrent requests to a service; for simplicity, our example uses a single client generating multiple concurrent invocations to the service.

After running our client service, if we take a look at the Payara Micro output, we can verify that the @Bulkhead annotation is working as expected.
[2021-10-18T18:20:18.284-0400] [] [INFO] [] [javax.enterprise.system.core] [tid: _ThreadID=89 _ThreadName=payara-executor-service-scheduled-task] [timeMillis: 1634595618284] [levelValue: 800] mp-fault-tolerance-example-client-1.0-SNAPSHOT was successfully deployed in 428 milliseconds.
[2021-10-18T18:32:40.472-0400] [] [INFO] [] [com.ensode.fault.toleranceclient.FaultToleranceClientService] [tid: _ThreadID=83 _ThreadName=http-thread-pool::http-listener(1)] [timeMillis: 1634597555867] [levelValue: 800] There are 1 concurrent invocations to this endpoint
[2021-10-18T18:32:40.472-0400] [] [INFO] [] [com.ensode.fault.toleranceclient.FaultToleranceClientService] [tid: _ThreadID=83 _ThreadName=http-thread-pool::http-listener(1)] [timeMillis: 1634597555867] [levelValue: 800] There are 3 concurrent invocations to this endpoint
[2021-10-18T18:32:40.472-0400] [] [INFO] [] [com.ensode.fault.toleranceclient.FaultToleranceClientService] [tid: _ThreadID=83 _ThreadName=http-thread-pool::http-listener(1)] [timeMillis: 1634597555867] [levelValue: 800] There are 2 concurrent invocations to this endpoint
[2021-10-18T18:32:40.472-0400] [] [SEVERE] [] [com.ensode.fault.toleranceclient.FaultToleranceClientService] [tid: _ThreadID=83 _ThreadName=http-thread-pool::http-listener(1)] [timeMillis: 1634596360472] [levelValue: 1000] [[
  java.util.concurrent.ExecutionException caught
java.util.concurrent.ExecutionException: org.eclipse.microprofile.faulttolerance.exceptions.BulkheadException: No free work or queue space.
<intermediate stack trace entries removed for brevity>
Caused by: org.eclipse.microprofile.faulttolerance.exceptions.BulkheadException: No free work or queue space.

By examining the output for Payara Micro, we can see that the first three concurrent invocations succeeded; the fourth one generated a BulkheadException, as expected.

We have no control over thread scheduling; it is handled by the JVM and/or the underlying operating system. For this reason, the output indicating the number of concurrent invocations may not match our expectations: the line of code sending output to the log file for the third thread executed before the same line on the second thread, which is why we see log entries indicating 1, 3, and 2 concurrent invocations, as opposed to 1, 2, and 3.
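Under the hood, the semaphore style behaves like a plain counting semaphore. The following standalone sketch (class and exception names are illustrative; the actual Fault Tolerance implementation throws BulkheadException) models the reject-when-full semantics:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

//Toy model of semaphore-style bulkhead semantics: at most 'value'
//invocations run at once; an invocation that cannot obtain a permit is
//rejected immediately. Illustration only; the real implementation
//throws BulkheadException instead of IllegalStateException.
public class ToySemaphoreBulkhead {

  private final Semaphore permits;

  public ToySemaphoreBulkhead(int value) {
    this.permits = new Semaphore(value);
  }

  public String invoke(Supplier<String> endpoint) {
    if (!permits.tryAcquire()) { //no waiting: reject right away
      throw new IllegalStateException("bulkhead full");
    }
    try {
      return endpoint.get();
    } finally {
      permits.release(); //free the slot for the next invocation
    }
  }
}
```

Once an invocation completes, its permit is released and the endpoint becomes callable again, matching the behavior we observed above.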

Using Thread Pools for Asynchronous Endpoints

Thread pool style @Bulkhead usage is limited to asynchronous endpoints. As before, we specify the maximum number of concurrent calls allowed for an endpoint via the value attribute of the @Bulkhead annotation; any additional invocations are placed in a waiting queue and serviced once the number of concurrent invocations to our endpoint drops below the specified value. We can specify the maximum number of waiting invocations via the waitingTaskQueue attribute of the @Bulkhead annotation.

The following example illustrates thread pool style @Bulkhead usage:
package com.ensode.faulttolerance;
//imports omitted
@RequestScoped
@Path("faulttoleranceexample")
public class FaulToleranceExampleResource {
  @Inject
  private ConcurrentInvocationCounter concurrentInvocationCounter;
  @POST
  @Path("threadpoolbulkhead")
  @Asynchronous
  @Bulkhead(value = 3, waitingTaskQueue = 2)
  @Produces(MediaType.TEXT_PLAIN)
  public CompletionStage<String> threadPoolBulkheadExample(
    @QueryParam("invocationNum") int invocationNum) throws
    InterruptedException {
    String retVal;
    retVal = String.format("Invocation number %d succeeded ",
            invocationNum);
    TimeUnit.SECONDS.sleep(3);
    return CompletableFuture.completedStage(retVal);
  }
}

As indicated by the @Asynchronous annotation, our endpoint is asynchronous; therefore, we can use @Bulkhead thread pool style. In our example, we allow up to three concurrent invocations, as indicated by the value attribute of @Bulkhead; we also allow a maximum of two waiting invocations, as indicated by the waitingTaskQueue attribute of the annotation.

Generating more than five concurrent requests (the three that are allowed to execute plus the two allowed in the waiting queue) should result in an exception. The following bash script generates six concurrent requests:
#!/bin/bash
for i in {1..6}; do
  curl -i -XPOST http://localhost:8080/faulttolerance/webresources/faulttoleranceexample/threadpoolbulkhead?invocationNum=$i &
done
Executing the preceding script from a bash shell results in the following output:
HTTP/1.1 500 Request failed.
Server: Payara Micro #badassfish
Connection: close
Content-Length: 0
X-Frame-Options: SAMEORIGIN
HTTP/1.1 200 OK
Server: Payara Micro #badassfish
Content-Type: text/plain
Content-Length: 31
X-Frame-Options: SAMEORIGIN
Invocation number 4 succeeded
HTTP/1.1 200 OK
Server: Payara Micro #badassfish
Content-Type: text/plain
Content-Length: 31
X-Frame-Options: SAMEORIGIN
Invocation number 3 succeeded
HTTP/1.1 200 OK
Server: Payara Micro #badassfish
Content-Type: text/plain
Content-Length: 31
X-Frame-Options: SAMEORIGIN
Invocation number 2 succeeded
HTTP/1.1 200 OK
Server: Payara Micro #badassfish
Content-Type: text/plain
Content-Length: 31
X-Frame-Options: SAMEORIGIN
Invocation number 5 succeeded
HTTP/1.1 200 OK
Server: Payara Micro #badassfish
Content-Type: text/plain
Content-Length: 31
X-Frame-Options: SAMEORIGIN
Invocation number 6 succeeded

Interestingly enough, the output shows that the first request failed and requests 2 through 6 succeeded. This happens because we have no control over thread scheduling; in this case, the first request was processed last, which is why it failed. It also failed immediately, as opposed to the others, which took three seconds to process; that is why, even though the first request was executed last, we see its output first.

Stop Invoking Repeatedly Failing Endpoints

The @CircuitBreaker annotation allows us to stop invoking endpoints that fail past a threshold; by default, if at least 50% of the last 20 invocations to an endpoint fail, then invocations to that method stop; instead, the MicroProfile runtime throws a CircuitBreakerOpenException. We can specify the number of invocations used and the ratio of failed requests via the requestVolumeThreshold and failureRatio attributes of @CircuitBreaker, respectively.

After a specified delay (defaulting to 500 milliseconds), the circuit is half opened, meaning that a limited number of trial requests (one by default) are allowed through to invoke the operation. If these requests succeed, the circuit is automatically closed, allowing further requests to access the endpoint; if at least one of them fails, the circuit is reopened, preventing any further calls to the endpoint. We can specify the delay to use before half-opening the circuit via the delay attribute of @CircuitBreaker and the unit of time for the delay via its delayUnit attribute; the default unit of time is milliseconds.
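The closed/open/half-open lifecycle can be summarized with a toy state machine. This is a simplified illustration with a hypothetical class name, not the actual Fault Tolerance implementation (which also enforces the delay and rejects calls while the circuit is open):

```java
import java.util.ArrayDeque;
import java.util.Deque;

//Toy model of the circuit breaker lifecycle: illustration only,
//not Payara's actual implementation.
public class ToyCircuitBreaker {

  enum State { CLOSED, OPEN, HALF_OPEN }

  private final int requestVolumeThreshold;
  private final double failureRatio;
  private final int successThreshold;
  private final Deque<Boolean> window = new ArrayDeque<>(); //true = failure
  private int halfOpenSuccesses;
  State state = State.CLOSED;

  ToyCircuitBreaker(int requestVolumeThreshold, double failureRatio,
      int successThreshold) {
    this.requestVolumeThreshold = requestVolumeThreshold;
    this.failureRatio = failureRatio;
    this.successThreshold = successThreshold;
  }

  //Record the outcome of one invocation and update the state
  void record(boolean failed) {
    switch (state) {
      case CLOSED:
        window.addLast(failed); //rolling window of recent outcomes
        if (window.size() > requestVolumeThreshold) {
          window.removeFirst();
        }
        long failures = window.stream().filter(f -> f).count();
        if (window.size() == requestVolumeThreshold
            && failures >= Math.ceil(requestVolumeThreshold * failureRatio)) {
          state = State.OPEN; //too many failures: open the circuit
        }
        break;
      case HALF_OPEN:
        if (failed) {
          state = State.OPEN; //any failure reopens the circuit
        } else if (++halfOpenSuccesses >= successThreshold) {
          state = State.CLOSED; //enough trial successes: close it
          window.clear();
          halfOpenSuccesses = 0;
        }
        break;
      case OPEN:
        //the real implementation rejects calls here with
        //CircuitBreakerOpenException until the delay elapses
        break;
    }
  }

  //Called once the configured delay has elapsed
  void delayElapsed() {
    if (state == State.OPEN) {
      state = State.HALF_OPEN;
      halfOpenSuccesses = 0;
    }
  }
}
```

With requestVolumeThreshold = 3 and failureRatio = .66, two failures out of the last three invocations open the circuit, which matches the configuration used in the example that follows.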

The following example illustrates how to use the @CircuitBreaker annotation:
package com.ensode.faulttolerance;
//imports omitted
@RequestScoped
@Path("faulttoleranceexample")
public class FaulToleranceExampleResource {
  @Inject
  private ConcurrentInvocationCounter concurrentInvocationCounter;
  @CircuitBreaker(requestVolumeThreshold = 3, failureRatio = .66,
    delay = 1, delayUnit = ChronoUnit.SECONDS, successThreshold = 2)
  @POST
  @Produces(MediaType.TEXT_PLAIN)
  @Path("circuitbreaker")
  public String circuitBreakerExample(@QueryParam("success") boolean success) {
    if (success == false) {
      throw new RuntimeException("forcing a failure for demo purposes");
    } else {
      return "Call succeeded";
    }
  }
}

For illustration purposes, in our example, we deliberately throw an exception when we receive a value of false as a query parameter. We use the last three invocations to our endpoint to calculate the threshold, as specified by the requestVolumeThreshold attribute of @CircuitBreaker. We specify the ratio of failed calls as .66 via the failureRatio attribute; since our request volume threshold is 3 and the failure ratio is .66, the circuit will open after two out of three consecutive calls fail.

In our example, we are specifying a delay of one second before we half-open the circuit; the number of seconds is specified in the delay attribute of @CircuitBreaker and the unit of time in the corresponding delayUnit attribute.

Let’s now write a client service that sends multiple requests to our endpoint so that we can verify that the @CircuitBreaker annotation is working as expected.
package com.ensode.fault.toleranceclient;
//imports omitted
@Path("/faulttoleranceclient")
public class FaultToleranceClientService {
  private static final Logger LOGGER = Logger.getLogger(FaultToleranceClientService.class.getName());
  @Inject
  @RestClient
  private FaultToleranceExampleResourceClient client;
  @POST
  @Path("circuitbreaker")
  public void circuitBreakerClient() throws InterruptedException {
    try {
      LOGGER.log(Level.INFO, client.circuitBreakerExample(true));
    } catch (RuntimeException re) {
      LOGGER.log(Level.SEVERE, re.getMessage());
    }
    try {
      LOGGER.log(Level.INFO, client.circuitBreakerExample(false));
    } catch (RuntimeException re) {
      LOGGER.log(Level.SEVERE, re.getMessage());
    }
    try {
      LOGGER.log(Level.INFO, client.circuitBreakerExample(false));
    } catch (RuntimeException re) {
      LOGGER.log(Level.SEVERE, re.getMessage());
    }
    //circuit opens
    try {
      LOGGER.log(Level.INFO, client.circuitBreakerExample(true));
      //call fails because the circuit is open
    } catch (CircuitBreakerOpenException e) {
      LOGGER.log(Level.SEVERE, "Circuit breaker is open", e);
    }
    //Wait one second, circuit is now half open, call succeeds.
    TimeUnit.SECONDS.sleep(1);
    try {
      LOGGER.log(Level.INFO, client.circuitBreakerExample(true));
      //call succeeds because the circuit is half open
    } catch (RuntimeException re) {
      LOGGER.log(Level.SEVERE, re.getMessage());
    }
    //circuit breaker is now closed
  }
}

Recall that our service deliberately throws an exception when it receives a value of false for its success query parameter; as such, the first invocation to the service succeeds, and the next two fail; since we met the threshold of failed requests, the circuit opens.

The next invocation to the service fails, even though we pass a value of true; the reason is that the circuit breaker is now open. After the specified delay of one second, the circuit breaker is half open; therefore, calls are allowed to reach the endpoint, and the last call from our client succeeds. Since we specified a success threshold of 2 via the successThreshold attribute of @CircuitBreaker, at this point, the circuit breaker remains in the half-open state; if the next request succeeds, the circuit is then closed; if it fails, it is reopened.

One last thing to mention before moving on, by default, @CircuitBreaker will increase the counter of failed calls if any exception is thrown from the endpoint. If we wish to limit the circuit breaker functionality to a certain set of exceptions, we can do so via the failOn attribute of @CircuitBreaker; this attribute accepts an array of child classes of Exception as its value, for example:
@CircuitBreaker(failOn = {ReallyBadException.class, EvenWorseException.class})
Similarly, we can specify exceptions to ignore via the skipOn attribute of @CircuitBreaker.
@CircuitBreaker(skipOn={DumbException.class, IsThisEvenAnIssueException.class})

Providing an Alternative Solution When Execution Fails

By default, any exceptions thrown from RESTful web service endpoints result in an exception being thrown in the client; however, we can gracefully recover from errors via the @Fallback annotation. When using @Fallback, we can specify an alternate method that will be executed instead of the failing endpoint; this method can be specified via the fallbackMethod attribute of @Fallback, as illustrated in the following example:
package com.ensode.faulttolerance;
//imports omitted
@RequestScoped
@Path("faulttoleranceexample")
public class FaulToleranceExampleResource {
  @Fallback(fallbackMethod = "fallbackMethod")
  @POST
  @Produces(MediaType.TEXT_PLAIN)
  @Path("fallback")
  public String fallbackExample(@QueryParam("success") boolean success) {
    if (success == false) {
      throw new RuntimeException(
      "forcing a failure for demo purposes");
    } else {
      return "Call succeeded";
    }
  }
  private String fallbackMethod(boolean success) {
    return "Something went wrong";
  }
}

To specify a fallback method, we simply indicate the method name as a String in the fallbackMethod attribute of @Fallback. The fallback method must have the same return type and parameter types as the potentially failing endpoint; notice that in our example, both the method implementing the endpoint and the fallback method return a String and take a single boolean parameter.

If the endpoint invocation succeeds, then nothing out of the ordinary happens; if it fails, then the fallback method is invoked instead. For instance, if we send a request to the endpoint we defined in our example and that method fails, we will instead get the output from the fallback method.
$ curl -XPOST http://localhost:8080/faulttolerance/webresources/faulttoleranceexample/fallback?success=false
Something went wrong

Using a fallback method is great if we wish to implement alternate functionality when our endpoint fails; however, what if instead we would like to get some diagnostic information about the failing endpoint? In that case, we can indicate a fallback handler class via the value attribute of @Fallback. A fallback handler must be a class implementing the FallbackHandler interface; this interface has a single abstract method called handle(), which accepts an instance of ExecutionContext as a parameter; we can use this parameter to obtain information about the failing method.

The following example illustrates how to implement a fallback handler:
package com.ensode.faulttolerance;
//imports omitted
@Dependent
public class ExampleFallbackHandler
  implements FallbackHandler<String> {
  private static final Logger LOGGER =
    Logger.getLogger(ExampleFallbackHandler.class.getName());
  @Override
  public String handle(ExecutionContext ec) {
    Throwable throwable = ec.getFailure();
    Method buggyMethod = ec.getMethod();
    Object[] parameters = ec.getParameters();
    LOGGER.log(Level.SEVERE, String.format(
      "%s thrown when invoking %s method with parameters: %s",
            throwable.getClass().getName(), buggyMethod.getName(),
             Arrays.asList(parameters)));
    return "Something went wrong, check the logs ";
  }
}

Our FallbackHandler implementation must be a CDI bean; the easiest way to turn it into one is to use one of the scope annotations; for FallbackHandler implementations, it is usually a good idea to use the @Dependent pseudoscope, which will cause the FallbackHandler implementation to use the same scope as the RESTful web service utilizing it.

The FallbackHandler interface has a generic type argument; its abstract handle() method returns the type we specify, which, in our example, is String. The handle() method is automatically invoked by the MicroProfile runtime, which passes it an instance of a class implementing the ExecutionContext interface; we can use this instance to obtain information about the failed invocation. As shown in the example, we can invoke its getFailure() method to obtain the Throwable (typically an Exception) that was thrown from the endpoint.

Additionally, we can invoke ExecutionContext.getMethod() to obtain a reference to the method that failed; we can obtain the method name as a String by invoking its getName() method.

Finally, we can obtain the arguments that were passed to the failing endpoint by invoking the getParameters() method on ExecutionContext.

Our example sends this information to the Payara Micro log and returns a String to the client.

In order to use our FallbackHandler implementation, we need to specify it as the value of the @Fallback annotation in the method implementing the endpoint, as illustrated in the following example:
package com.ensode.faulttolerance;
//imports omitted
@RequestScoped
@Path("faulttoleranceexample")
public class FaulToleranceExampleResource {
  @Fallback(ExampleFallbackHandler.class)
  @POST
  @Produces(MediaType.TEXT_PLAIN)
  @Path("fallbackhandler")
  public String fallbackHandlerExample(
    @QueryParam("success") boolean success) {
    if (success == false) {
      throw new RuntimeException(
      "forcing a failure for demo purposes");
    } else {
      return "Call succeeded";
    }
  }
}

As seen in the example, all we have to do to defer to a FallbackHandler implementation in case of failure is to add the FallbackHandler class as the value of the @Fallback annotation.

If a call to the endpoint fails, our FallbackHandler implementation takes over.
$ curl -XPOST http://localhost:8080/faulttolerance/webresources/faulttoleranceexample/fallbackhandler?success=false
Something went wrong, check the logs
If we check the Payara Micro output or log file, we can see the information we retrieved from ExecutionContext, which can be used to diagnose and correct the issue.
[2021-10-20T14:00:27.797-0400] [] [SEVERE] [] [com.ensode.faulttolerance.ExampleFallbackHandler] [tid: _ThreadID=77 _ThreadName=http-thread-pool::http-listener(2)] [timeMillis: 1634752827797] [levelValue: 1000] java.lang.RuntimeException thrown when invoking fallbackHandlerExample method with parameters: [false]
By default, @Fallback takes over if any exception is thrown from the endpoint. If we wish to limit the fallback functionality to a certain set of exceptions, we can do so via the applyOn attribute of @Fallback; this attribute accepts an array of child classes of Exception as its value, for example:
@Fallback(value = SomeFallBackHandler.class, applyOn = {ReallyBadException.class, EvenWorseException.class})
Similarly, we can specify exceptions to ignore via the skipOn attribute of @Fallback.
@Fallback(fallbackMethod = "someMethod", skipOn = {DumbException.class, IsThisEvenAnIssueException.class})

Retrying Execution in Case of Failure

We can use the @Retry annotation to automatically retry an endpoint invocation that fails, as illustrated in the following example:
package com.ensode.faulttolerance;
//imports omitted
@RequestScoped
@Path("faulttoleranceexample")
public class FaulToleranceExampleResource {
  private static final Logger LOGGER = Logger.getLogger
    (FaulToleranceExampleResource.class.getName());
  @Inject
  private EndpointSuccessDeterminator endpointSuccessDeterminator;
  @Retry
  @GET
  @Produces(MediaType.TEXT_PLAIN)
  @Path("retry")
  public String retryExample() {
    LOGGER.log(Level.INFO, "retryExample() invoked");
    boolean success;
    success = endpointSuccessDeterminator.
      allowEndpointToSucceed();
    if (!success) {
      LOGGER.log(
        Level.SEVERE, "retryExample() invocation failed");
      throw new RuntimeException(
        "forcing a failure for demo purposes");
    } else {
      LOGGER.log(Level.INFO,
        "retryExample() invocation succeeded");
      return "Call succeeded ";
    }
  }
}
In our example, we are using an application scoped CDI bean to force our endpoint to fail every other invocation; the bean simply returns a boolean value and flips the value of the boolean after each invocation.
package com.ensode.faulttolerance;
import javax.enterprise.context.ApplicationScoped;
@ApplicationScoped
public class EndpointSuccessDeterminator {
  private boolean successIndicator = true;
  public boolean allowEndpointToSucceed() {
    successIndicator = !successIndicator;
    return successIndicator;
  }
}

We use this bean in our endpoint to force a failure so that we can demonstrate the @Retry annotation in action.

If we send an HTTP GET request to our endpoint, nothing looks out of the ordinary.
$ curl http://localhost:8080/faulttolerance/webresources/faulttoleranceexample/retry
Call succeeded
However, if we look at the Payara Micro output, we can see that the first invocation failed, it was automatically retried, and the second invocation succeeded.
[2021-10-21T14:25:22.034-0400] [] [INFO] [] [com.ensode.faulttolerance.FaulToleranceExampleResource] [tid: _ThreadID=78 _ThreadName=http-thread-pool::http-listener(2)] [timeMillis: 1634840722034] [levelValue: 800] retryExample() invoked
[2021-10-21T14:25:22.035-0400] [] [SEVERE] [] [com.ensode.faulttolerance.FaulToleranceExampleResource] [tid: _ThreadID=78 _ThreadName=http-thread-pool::http-listener(2)] [timeMillis: 1634840722035] [levelValue: 1000] retryExample() invocation failed
[2021-10-21T14:25:22.130-0400] [] [INFO] [] [com.ensode.faulttolerance.FaulToleranceExampleResource] [tid: _ThreadID=78 _ThreadName=http-thread-pool::http-listener(2)] [timeMillis: 1634840722130] [levelValue: 800] retryExample() invoked
[2021-10-21T14:25:22.131-0400] [] [INFO] [] [com.ensode.faulttolerance.FaulToleranceExampleResource] [tid: _ThreadID=78 _ThreadName=http-thread-pool::http-listener(2)] [timeMillis: 1634840722131] [levelValue: 800] retryExample() invocation succeeded
The @Retry annotation provides a few attributes we can use to control things like how long to retry, how many times to retry, etc. Table 7-1 lists all attributes for @Retry.
Table 7-1. @Retry Annotation Attributes

abortOn: An array of Throwable types that should not trigger a retry. Default: none.

delay: The delay between retries. Default: 0.

delayUnit: The unit of time for the delay attribute. Default: ChronoUnit.MILLIS (milliseconds).

durationUnit: The unit of time for the maxDuration attribute. Default: ChronoUnit.MILLIS (milliseconds).

jitter: Used to randomly vary retry delays; the actual delay falls within [delay - jitter, delay + jitter]. Default: 200.

jitterDelayUnit: The unit of time for the jitter attribute. Default: ChronoUnit.MILLIS (milliseconds).

maxDuration: Specifies how long to keep retrying. Default: 180000 (with the default durationUnit of milliseconds, this is equivalent to three minutes).

maxRetries: The maximum number of times to retry; a value of -1 indicates no maximum (retry forever). Default: 3.

retryOn: An array of Throwable types that should trigger a retry. Default: java.lang.Exception.
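Putting a few of these attributes together, an endpoint could be annotated as follows; the attribute values, exception types, and path are illustrative, not taken from the example application:

```java
//Retry up to five times on IOException, waiting roughly 200 milliseconds
//(plus or minus a 100 millisecond jitter) between attempts, but give up
//immediately on SecurityException; values are illustrative
@Retry(maxRetries = 5,
       delay = 200,
       jitter = 100,
       retryOn = IOException.class,
       abortOn = SecurityException.class)
@GET
@Produces(MediaType.TEXT_PLAIN)
@Path("retrywithattributes")
public String retryWithAttributesExample() {
  //endpoint logic omitted
  return "Call succeeded ";
}
```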

Defining a Maximum Duration for Execution

MicroProfile Fault Tolerance provides a @Timeout annotation we can use to specify the maximum time to allow a RESTful web service endpoint to execute. If the method takes longer than the specified amount of time, a TimeoutException is thrown.

The following example illustrates how to use the @Timeout annotation:
package com.ensode.faulttolerance;
//imports omitted
@RequestScoped
@Path("faulttoleranceexample")
public class FaulToleranceExampleResource {
  private static final Logger LOGGER = Logger.getLogger(FaulToleranceExampleResource.class.getName());
  @Timeout(value = 3, unit = ChronoUnit.SECONDS)
  @GET
  @Produces(MediaType.TEXT_PLAIN)
  @Path("timeout")
  public String timeoutExample(@QueryParam("delay") long delay) {
    try {
      TimeUnit.SECONDS.sleep(delay);
    } catch (InterruptedException ex) {
      LOGGER.log(Level.INFO, "sleep() interrupted");
    }
    return "Call returned successfully ";
  }
}

In our example, we are specifying that our RESTful web service endpoint should take no more than three seconds to execute. As the example shows, the timeout is specified by combining the value and unit attributes of @Timeout, with the former containing a long value and the latter containing the unit of time. The default values for value and unit are 1000L and ChronoUnit.MILLIS (milliseconds), for a default timeout of one second. In our example, we simply suspend execution for the number of seconds received as a query parameter; this way, we can force the endpoint to time out so that we can see the annotation in action.

If we pass a value of 4 as a query parameter to our endpoint, we can see the @Timeout annotation in action.
$ curl -i  http://localhost:8080/faulttolerance/webresources/faulttoleranceexample/timeout?delay=4
HTTP/1.1 500 Internal Server Error
Content-Language:
Content-Type: text/html
Connection: close
Content-Length: 1494
X-Frame-Options: SAMEORIGIN
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><title>Payara Micro #badassfish - Error report</title><style type="text/css"><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}HR {color : #525D76;}--></style> </head><body><h1>HTTP Status 500 - Internal Server Error</h1><hr/><p><b>type</b> Exception report</p><p><b>message</b>Internal Server Error</p><p><b>description</b>The server encountered an internal error that prevented it from fulfilling this request.</p><p><b>exception</b> <pre>javax.servlet.ServletException: org.eclipse.microprofile.faulttolerance.exceptions.TimeoutException</pre></p><p><b>root cause</b> <pre>org.eclipse.microprofile.faulttolerance.exceptions.TimeoutException</pre></p><p><b>note</b> <u>The full stack traces of the exception and its root causes are available in the Payara Micro #badassfish logs.</u></p><hr/><h3>Payara Micro #badassfish</h3></body></html>

As we can see from the output of the curl command, we receive an HTTP 500 error. The response body is automatically generated by Payara Micro; by examining it, we can see that a TimeoutException was thrown. This is the expected behavior, since the endpoint took longer than the specified timeout of three seconds.

If the method takes less than the specified timeout value (three seconds, in our example), then the method executes normally.
$ curl -i  http://localhost:8080/faulttolerance/webresources/faulttoleranceexample/timeout?delay=2
HTTP/1.1 200 OK
Server: Payara Micro #badassfish
Content-Type: text/plain
Content-Length: 27
X-Frame-Options: SAMEORIGIN
Call returned successfully
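Conceptually, the behavior @Timeout provides can be approximated with plain JDK concurrency utilities. The following sketch is illustrative only (the class and method names are hypothetical, not Payara internals, and it uses java.util.concurrent.TimeoutException rather than the MicroProfile exception type): the task runs on a separate thread, and a TimeoutException is thrown if it does not finish within the allotted time.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutSketch {

    // Plain-JDK approximation of @Timeout semantics: run the task on a
    // separate thread and throw TimeoutException if it does not finish
    // within the allotted time.
    static <T> T callWithTimeout(Callable<T> task, long timeout, TimeUnit unit)
            throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<T> future = executor.submit(task);
        try {
            return future.get(timeout, unit);
        } finally {
            future.cancel(true); // interrupt the task if it is still running
            executor.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Completes well within the limit, so the result is returned normally.
        System.out.println(callWithTimeout(() -> "fast result", 3, TimeUnit.SECONDS));

        // Sleeps past the limit, so a TimeoutException is thrown, mirroring
        // the HTTP 500 response seen in the curl output above.
        try {
            callWithTimeout(() -> {
                TimeUnit.SECONDS.sleep(4);
                return "slow result";
            }, 1, TimeUnit.SECONDS);
        } catch (TimeoutException te) {
            System.out.println("TimeoutException thrown, as expected");
        }
    }
}
```

As with @Retry, the annotation spares us from writing this plumbing ourselves; the Fault Tolerance runtime enforces the limit around the intercepted method call.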

Summary

In this chapter, we covered Payara Micro’s support for the MicroProfile Fault Tolerance API.

We discussed how to make RESTful web services asynchronous so that clients don’t have to block waiting for them to return.

We also saw how we can limit the number of concurrent executions of a RESTful service endpoint, preventing a buggy or malicious client from overloading the system.

We then covered how to transparently stop invocations to an endpoint that keeps failing repeatedly.

We also saw how to automatically invoke a fallback implementation when a RESTful web service fails.

Finally, we saw how to automatically retry failed RESTful service endpoint invocations, and how to specify the maximum amount of time we allow an endpoint to execute before it times out.
