Kie Execution Server

We've already discussed the possibility of having a specific Drools-oriented service to run our rules in an isolated environment. The Kie Execution Server (or Kie Server for short) is an out-of-the-box implementation of such a service. It is a modular, standalone server component, packaged as a WAR file, that can be used to execute rules and processes. It is currently available for web containers and JEE6 and JEE7 application containers.

The main purpose of the Kie Server is to be a runtime environment for Kie components, one that uses as few resources as possible so that it can be easily deployed in cloud environments. Each instance of the Kie Server can create and use many Kie Containers, and its functionality can be extended through Kie Server Extensions.

Also, the Kie Server allows us to provide Kie Server Controllers: endpoints that expose Kie Server functionality. In a sense, they act as the front end of our Kie Server.

Let's take a look at how we can configure these components inside our Kie Server instances.

Configuring Kie Server

Kie Server provides two default Kie Server Extensions: one for Drools and one for jBPM. Even though these are the only ones currently provided, Kie Server Extensions are designed so that we can add them to the Kie Server in as many flavors as we need. The Kie Server loads them through the standard ServiceLoader mechanism: a set of files included in the META-INF/services folder of the application classpath that declare the available implementations of expected interfaces.

In our case, the expected interface is KieServerExtension, so we will need a META-INF/services/org.kie.services.api.KieServerExtension file, whose only contents will be the fully qualified names of the implementations of that interface.
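The ServiceLoader mechanism itself can be demonstrated with plain JDK code, independently of Kie. In the following sketch, the Greeter interface, EnglishGreeter class, and the temporary-directory classpath are inventions for illustration only (they are not Kie APIs); the demo writes a registration file at runtime and lets ServiceLoader discover the implementation, the same way the Kie Server discovers KieServerExtension implementations from a jar's META-INF/services entry:

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class ServiceLoaderDemo {

    // The "expected interface" the loader will look up.
    public interface Greeter {
        String greet();
    }

    // An implementation discovered only through its registration file.
    public static class EnglishGreeter implements Greeter {
        @Override
        public String greet() { return "hello"; }
    }

    // Builds a throwaway classpath root containing only the registration
    // file, mimicking what a META-INF/services entry inside a jar does.
    public static List<String> loadGreetings() throws Exception {
        Path root = Files.createTempDirectory("serviceloader-demo");
        Path services = root.resolve("META-INF").resolve("services");
        Files.createDirectories(services);
        // File name = interface binary name; contents = implementation names.
        Files.write(services.resolve(Greeter.class.getName()),
                EnglishGreeter.class.getName().getBytes(StandardCharsets.UTF_8));

        ClassLoader cl = new URLClassLoader(
                new URL[]{ root.toUri().toURL() },
                ServiceLoaderDemo.class.getClassLoader());

        List<String> greetings = new ArrayList<>();
        for (Greeter greeter : ServiceLoader.load(Greeter.class, cl)) {
            greetings.add(greeter.greet());
        }
        return greetings;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadGreetings()); // [hello]
    }
}
```

The Kie Server does the equivalent lookup at startup, instantiating every class listed in the KieServerExtension registration file it finds on the classpath.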

We have an example of such a configuration in the project under the chapter-11/chapter-11-kie-server folder of our code bundle. This project adds an extra feature to the Kie Server: it makes sure all Kie Bases inside it publish statistics through JMX. The CustomKieServerExtension Java class defines a series of methods:

  • init/destroy: These let us define how to start and stop the server components that back a specific service in our Kie Server. In our case, we just make sure JMX is enabled by asking for the MBean Server.
  • createContainer/disposeContainer: These are invoked for every Kie Container used in our Kie Server, letting us give each container special treatment. Since our functionality is mostly targeted at Kie components, this is the proper connection point for services aimed at newly created Kie components. In our case, we register the JMX beans using the DroolsManagementAgent singleton class:
    DroolsManagementAgent.getInstance().registerKnowledgeBase(kbase);
  • getAppComponents: This method is used by other extensions to obtain information about the services our extension exposes (if any).
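The JMX side of this extension can be mimicked with plain JDK APIs. In the sketch below, the KieBaseStats class and its RuleFiringCount attribute are hypothetical stand-ins for the statistics beans that DroolsManagementAgent registers; only the platform MBean Server calls are real JDK APIs, and they are the same kind of calls an extension's init and createContainer methods would make:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxStatsDemo {

    // Standard MBean convention: the management interface name must be
    // the implementation class name plus the "MBean" suffix.
    public interface KieBaseStatsMBean {
        long getRuleFiringCount();
    }

    public static class KieBaseStats implements KieBaseStatsMBean {
        @Override
        public long getRuleFiringCount() { return 42L; } // placeholder metric
    }

    // Registers the statistics bean on the platform MBean Server, the same
    // server our extension asks for in its init method.
    public static ObjectName register() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name =
                new ObjectName("org.example.kie:type=KieBaseStats,name=demo");
        if (!server.isRegistered(name)) {
            server.registerMBean(new KieBaseStats(), name);
        }
        return name;
    }

    public static void main(String[] args) throws Exception {
        ObjectName name = register();
        Object count = ManagementFactory.getPlatformMBeanServer()
                .getAttribute(name, "RuleFiringCount");
        System.out.println(count); // 42
    }
}
```

Once a bean like this is registered, any JMX console (jconsole, VisualVM) attached to the server process can browse its attributes, which is exactly how the published Kie Base statistics become visible.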

Once the extension has been deployed in an app server, we will need to create a user with the kie-server role in that server, and we will then be able to access our deployment through the http://SERVER/CONTEXT/services/rest/server/ URL. The following is an example of an expected response:


Inside the capabilities element of the response, we can see that Statistics is one of the exposed capabilities; that is the extension we created. Any functionality can be exposed in this way, such as adding support for other communication protocols to our Kie Server Extensions (for example, Apache Mina - https://mina.apache.org - or RabbitMQ - https://www.rabbitmq.com).

Note

When we run this example inside our test, it creates a Wildfly App Server (http://wildfly.org) instance and deploys our customized Kie Server inside it. For it to work properly, we also create a few configuration files inside that server. You can review the assembly steps in the POM file of the kie-server-tests project for the Wildfly server. If you wish to configure it for any other app or web server, a detailed guide for other environments is available at: https://docs.jboss.org/drools/rele.

Default exposed Kie Server endpoints

As for the API exposed by the Kie Server, it comes in two main flavors: REST and JMS. Both endpoints work with the same commands for creating/disposing containers and for operating against Kie Sessions, and both are used in almost the same way through a client utility called KieServicesClient, available in the org.kie.remote:kie-remote-client Maven dependency. Internally, however, they work in very different ways.

REST exposes the functionality of Kie Containers through a REST API. It provides a very useful way to interact with any type of system, since virtually every language in commercial use provides libraries for invoking REST APIs. This is the best choice both for interacting with applications written in other languages and for initial exploration of the API. A full description of the REST API exposed through the Kie Server can be found here: https://docs.jboss.org/drools/release/6.3.0.Final/drools-docs/html/ch22.html#d0e22326.

JMS exposes the functionality of Kie Containers through three specific JMS queues: Kie.SERVER.REQUEST (which handles incoming command requests), Kie.SERVER.RESPONSE (which sends back responses), and Kie.SERVER.EXECUTOR (for asynchronous calls, mostly used by BPM components). Since JMS is naturally asynchronous, it is the best choice for distributed environments; each Kie Server available at any given time competes to take messages from these queues, so high availability and performance scale naturally as the volume of requests grows.
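The competing-consumer behavior described above can be sketched without a JMS broker. In this illustrative demo, a plain in-memory BlockingQueue stands in for the Kie.SERVER.REQUEST queue, and a few worker threads stand in for Kie Server instances; the names and message counts are inventions for the example:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class CompetingConsumersDemo {

    // Distributes `messages` pending requests across `servers` consumers
    // that all poll the same queue; returns how many each one handled.
    public static Map<String, Integer> process(int messages, int servers)
            throws InterruptedException {
        BlockingQueue<String> requestQueue = new LinkedBlockingQueue<>();
        for (int i = 0; i < messages; i++) {
            requestQueue.add("command-" + i);
        }
        Map<String, Integer> handled = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(servers);
        for (int s = 0; s < servers; s++) {
            String serverName = "kie-server-" + s;
            pool.submit(() -> {
                // Each "server" keeps taking work until the queue is drained;
                // a message is consumed by exactly one server.
                while (requestQueue.poll() != null) {
                    handled.merge(serverName, 1, Integer::sum);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return handled;
    }

    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> handled = process(100, 3);
        int total = handled.values().stream().mapToInt(Integer::intValue).sum();
        System.out.println(total); // 100: every request served exactly once
    }
}
```

Adding more consumers to the pool increases throughput without any routing logic, which is the property that makes the JMS endpoint attractive for scaling Kie Servers horizontally.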

There are two examples of using these APIs in the code bundle. Both can be found in the chapter-11/chapter-11-kie-server/kie-server-test folder, under the names RESTClientExampleTest and JMSClientExampleTest, for REST and JMS respectively. They are extremely similar, except for how the KieServicesClient class is initialized:

KieServicesConfiguration config =
    KieServicesFactory.newRestConfiguration(
        "http://localhost:8080/kie-server/services/rest/server",
        "testuser", "test", 60000);
KieServicesClient client =
    KieServicesFactory.newKieServicesClient(config);

The previous code shows the initialization block for a Kie Server client that uses REST as the endpoint configuration, for a Kie Server running at http://localhost:8080.

Besides managing deployments through code, the Kie projects provide a set of workbench tools that allow us to create, build, and deploy rule definitions in any Kie Server without having to write any code. These tools are referred to as Workbenches, and we'll see an introduction to how they work in the next section.
