This chapter explores what it means to implement event processing in a commercial IT environment. We’ll begin by looking at how service-oriented architecture (SOA) relates to event-driven architecture (EDA). Then, we’ll discuss how projects that implement event processing differ from those that don’t, and suggest six specific action items for achieving success in event processing.
SOA is probably the most-talked-about architectural style for modern business applications. Unfortunately, the talk can be confusing because people use the term SOA to mean a variety of things. To some, SOA is a general term for any new distributed application. To others, it means an application that was built using web services technologies, specifically the Web Services Description Language (WSDL) and the SOAP communication protocol. Yet others have a concept of SOA that requires the use of the Web, representational state transfer (REST), or business process management (BPM). With this diversity of definitions, it is not surprising that some people think that SOA is obsolete or unimportant while others think that it will solve most of the world’s problems. The reality is, of course, somewhere in between—SOA is a set of good ideas but it’s not a panacea.
SOA and EDA are complementary notions. SOA applications, like other applications, can use request-driven or EDA interactions among their components. Conversely, many business applications that use EDA will also use SOA for some aspects of their construction. To clarify the overlap of SOA and EDA, we’ll start by defining the nature of SOA.
SOA is based on the venerable concept of a service. One party, the service provider, performs a function to assist another party, the service consumer. Long before computers were invented, companies organized many parts of their operations as services. Today they use copy services, package and mail courier services, company cafeterias, shipping departments, accounting departments, human resources departments, legal services, security services, janitorial services, personnel recruiters, auditors, advertising agencies, and travel agencies. At a business level, a service provider is a department, workgroup, or some other business unit. For example, the HR department provides recruiting and other personnel-related services to other business units. Similarly, the IT department is a service provider to other departments. Some service providers are part of the company (they are internal departments) and others are part of another company (the work is contracted outside the company, or “outsourced”).
In this context, a service consumer refers to a business unit that uses the service; it does not mean a private individual person acting as a customer. The “service” is technically the consumer’s view of the provider’s capabilities. A business unit can be a service consumer in some relationships and a service provider in others. The travel department is a consumer of the janitorial service every day, and it is a service provider to the accounting department when, for example, an accountant needs to travel.
Services are shared. All of the business units use the cafeteria, IT department, and enterprise network so that each group doesn’t have to implement its own version of these functions. Moreover, a new company can start up more quickly by leveraging outsourced services from copy services, caterers, package and mail couriers, law firms, travel agencies, and other service providers.
The service structure improves a company’s flexibility and efficiency. Each service provider is modular and largely autonomous. A company can replace its advertising agency without bringing in a new janitorial service because they are unrelated. It’s relatively easy for each business unit to work with other service-providing business units because there is an informal or formal mutual understanding of the nature of the service. For example, the service provided by the company cafeteria is defined by a sign that lists the hours of operation, the menu, and the prices. If the cafeteria is outsourced, the service description will also be written into a legal contract between the company and the caterer.
Consumers generally don’t have to know or care about the internals of a service provider’s operation. The service is a “black box,” so the consumer is relieved of the burden of understanding the details of the provider’s operation. The provider benefits because it has the freedom to change its internal processes without telling the consumer, as long as the terms of the contract are not affected. For example, a mail and package courier can reroute its airplanes, build a new hub, or change the way it runs its sorting operations without needing permission from its customers as long as the service-level agreement (SLA) is not affected.
The characteristics that make the service concept helpful in organizing business units are translated into the software realm as SOA. SOA is defined as an architectural style for application software in which five principles are implemented:
The application must be modular, so that software components (agents) can be added, replaced, or modified individually.
The components must be distributable. They must be able to run on different computers and communicate with each other by sending messages over a network at run time.
Component interfaces must be “discoverable” by another application developer. The interfaces and related externally visible characteristics of the software components must be clearly defined and documented in metadata. Metadata describes the input and output messages of each component and enough other information to enable developers to find and use the component as part of a new application. SOA metadata is almost always in a software form, such as in a file, web page, message, registry, or repository.
A software component that provides an SOA service can be readily swapped out for another component that offers the same service as long as the new one uses the same interface as the old one. The interface design (“what to do”) is separate from the internal service implementation (“how it is done”).
Service provider components must be shareable (or “reusable”). This means that they are designed and deployed in a manner that enables them to be invoked successively by disparate application systems or by multiple copies of the same application. The same code and data are available to users of any application that shares that component.
Any business application that implements these five principles is an SOA application. The combination of the first four principles implies that SOA components are “loosely coupled,” a property that leads to flexibility. An SOA component can be added to the system or modified without causing unintended side effects or otherwise disrupting other components as long as the interface is constant. SOA systems can be developed, maintained, and expanded in small, easily understood increments, facilitating an “organic” approach to ever-changing business processes.
SOA emerged as a software architectural style during the 1990s as companies began implementing distributed applications on a large scale. The five principles of SOA represent what architects learned about best practices as they gained experience with component software. The term SOA first appeared in a 1996 Gartner report, but mainstream developers didn’t use SOA routinely for another decade. It’s now rare to build a new, distributed business application without adhering to the principles of SOA. The benefits of documenting the interface, making components replaceable, and making components shareable are clear. Modern software development tools make it easy to implement these characteristics. Experts differ on whether SOA is best implemented using REST or more-conventional interface styles, but few dispute the merits of SOA (using the definition given here).
The majority of the interactions in an SOA application are request-driven. It’s natural for a service consumer to send a message to a service provider to request a service, and then get a reply containing some data or an acknowledgment that the service has been completed. However, most SOA application systems also have some aspects that are implemented using event-driven interactions.
Architectural styles are composable in the sense that multiple styles can apply simultaneously to one application system. SOA and EDA are composable—they are compatible and complementary. In Chapter 3, we explained the five principles of EDA:
Notifications report a current event as it happens.
Notifications are pushed, not pulled.
Consumers respond to events as soon as they are recognized.
Notifications are one-way messages.
Notifications are free of commands.
If an interaction conforms to the five principles of SOA and the five principles of EDA simultaneously, it is event-driven SOA. Virtually every SOA interaction, including request-driven SOA interactions, adheres to the first three principles of EDA. A request message is immediate, the request is pushed, and the service provider responds immediately. However, event-driven SOA diverges from request-driven SOA because of the last two principles. The notions of one-way, “fire and forget” messages and “free of commands” are what make event-driven SOA different from request-driven SOA. They are also what make EDA minimally coupled, whereas request-driven SOA is more coupled (despite the fact that it is loosely coupled compared to other kinds of request-driven interactions).
Conversely, most EDA interactions in business applications also qualify as SOA. EDA interactions are usually modular because of the separation of event producer and event consumer. They’re also “swappable” because the producer or consumer can be replaced without modifying the other. Most EDA applications are also distributable (the event consumer can be on a different computer than the producer is on), discoverable (the event schema and other interface metadata are reasonably accessible to other developers), and shareable (an event consumer can receive notifications from multiple event producers and a notification from one producer can be delivered to multiple consumers). However, not all EDA systems qualify as SOA. Some EDA applications run entirely on one computer (the event producer and consumer are on the same platform). Developers can hide the interface metadata so that other developers can’t send or receive notifications from an EDA component. The channel used between a producer and consumer can be closed to other components.
Event-driven SOA interactions should be used for those aspects of an SOA application for which the component that acts first doesn’t need a response from the second component. Event-driven SOA is especially useful in data-consistency, information-dissemination, and situation-awareness scenarios. Conventional request-driven SOA interactions should be used for the aspects of an application that require a reply to the first component.
The process of developing an SOA application that includes both event-driven and request-driven services is similar to the process used for developing a request-driven-only SOA application, with a few important differences. The next two sections of this chapter explore best practices for developing SOA applications that have both types of services. First we’ll look at the communication and granularity issues that arise when specifying SOA services and events. Then we’ll look at how the relatively new concept of “service components” can improve the quality of an SOA application.
Many of the issues that arise when designing request-driven SOA services also arise when designing event notifications. In both cases, analysts and software engineers must document the contract or interface between the components. This includes defining the contents of the request, response, and notification messages (for example, XML documents). For request-driven services, the function of the service provider must also be specified: Will it GET certain documents, PUT some data in a database, LOOK_UP_CUSTOMER_CREDIT_RATING, or perform some other function? Event-driven interactions don’t specify a function, but the developer must specify the message topic and sometimes other properties associated with the notification.
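To make this concrete, the following sketch (in Java) shows what such a notification contract might look like; the topic name, class name, and payload fields are illustrative assumptions, not drawn from any standard or product.

```java
// Hypothetical contract for a transactional notification.
// The topic name and payload fields are illustrative only.
public final class OrderShippedNotification {

    // Topic (subject) that event consumers subscribe to.
    public static final String TOPIC = "logistics.order.shipped";

    private final String orderId;               // business key of the order
    private final String carrier;               // who is transporting the goods
    private final java.time.Instant shippedAt;  // when the happening occurred

    public OrderShippedNotification(String orderId, String carrier,
                                    java.time.Instant shippedAt) {
        this.orderId = orderId;
        this.carrier = carrier;
        this.shippedAt = shippedAt;
    }

    public String orderId() { return orderId; }
    public String carrier() { return carrier; }
    public java.time.Instant shippedAt() { return shippedAt; }
}
```

Note that the class reports something that has already happened and carries no instruction about what a consumer should do with it, in keeping with the EDA principles described earlier.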
For both kinds of interface, developers must also resolve communication and service quality issues, such as:
How will the sender obtain the address of the appropriate message recipient?
What should be done to ensure security and privacy?
Is the sender responsible for trying to send the message again if the first attempt fails, or will this be handled by the channel?
Can the message be delivered twice, or will this cause errors in the application logic? (Look up idempotence in Wikipedia if you’re interested in pursuing this question further; a small deduplication sketch follows this list.)
What should be done if the recipient isn’t available?
Is the message compressed?
Is the message part of a group of related messages that must be handled together, or is it an individual message?
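To illustrate the duplicate-delivery (idempotence) question in the list above, here is a minimal sketch that assumes each notification carries a unique message ID—a common convention, though not a universal one. The consumer remembers the IDs it has already processed and silently drops redeliveries.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Minimal duplicate-detection sketch: the consumer remembers the IDs of
// messages it has already processed and drops redeliveries.
public class DeduplicatingConsumer {

    private final Set<String> processedIds = ConcurrentHashMap.newKeySet();

    public void onNotification(String messageId, String payload) {
        // add() returns false if the ID was already present, i.e., a duplicate
        if (!processedIds.add(messageId)) {
            return; // duplicate delivery; processing it again could corrupt state
        }
        handle(payload);
    }

    private void handle(String payload) {
        // application-specific event handling goes here
    }
}
```

A production consumer would bound or persist the set of processed IDs, but the principle is the same.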
These issues aren’t unique to SOA or EDA—they are inherent in the design of any distributed application. We won’t address them here. However, the granularity of a service or event has emerged as an especially important consideration for all SOA interfaces, so it warrants some discussion.
Granularity for request-driven interfaces is the extent of the function that will be provided by the service provider. Granularity for an event notification is the extent or scope of the happening that is described by the event object. There is no simple formula for determining the proper granularity of an SOA interface. Analysts and architects must understand the business process and its data to identify the granularity of services and events for a particular business purpose.
As a rule, a well-designed SOA software service should usually map directly to a business task. Business managers, analysts, and software engineers can use the same service as a natural unit of composition on both a conceptual design level (describing a task in a business process) and an implementation level (describing the work of a software component that supports that task). For example, “Verify ZIP code for this address” is a narrowly focused task that is readily understood by business analysts and end users as part of a business process. It can (and should) also be implemented as an SOA service by a software component that can be shared by multiple applications.
Similarly, a well-designed business event object should usually map directly to a business document, part of a document, or something that could have been a document. A transactional notification is often modeled on a paper form in the same way that an SOA service interface is modeled on a business function. For example, a purchase order form, purchase order acknowledgment, and invoice can be used as starting points to design the respective transactional notifications.
A more complex task, such as “Give me this customer’s profile information,” also constitutes a potentially good request-driven service because it is meaningful to businesspeople and it represents a single, potentially shareable task. However, this broader, coarse-grained function would typically be a composite service because it involves multiple subordinate tasks. The service provider component for this service could invoke a series of other services to retrieve demographic information, account balances, transaction information (from mortgage, savings, checking, and credit card systems), and reports of recent customer activity on a website. A composite service coordinates the work of simpler services. It may be implemented in a regular programming language, such as C#, Java, or Visual Basic, or in a BPM tool using, for example, Business Process Execution Language (BPEL). In object-oriented programming, the component would be called a process object because it directs the work of other objects.
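As a rough illustration of such a process object, the following Java sketch composes three subordinate services behind one coarse-grained customer-profile operation. The interface and method names are hypothetical; in practice the same composition might instead be expressed in BPEL or another BPM tool.

```java
import java.util.List;
import java.util.Map;

// Hypothetical composite service: one coarse-grained request fans out to
// several finer-grained services and assembles their results.
public class CustomerProfileService {

    // Subordinate SOA services, expressed here as plain Java interfaces.
    public interface DemographicsService { Map<String, String> lookup(String customerId); }
    public interface BalanceService      { Map<String, Double> balancesFor(String customerId); }
    public interface WebActivityService  { List<String> recentActivity(String customerId); }

    private final DemographicsService demographics;
    private final BalanceService balances;
    private final WebActivityService webActivity;

    public CustomerProfileService(DemographicsService d, BalanceService b, WebActivityService w) {
        this.demographics = d;
        this.balances = b;
        this.webActivity = w;
    }

    // The composite ("process object") coordinates the simpler services.
    public Map<String, Object> getCustomerProfile(String customerId) {
        return Map.of(
            "demographics", demographics.lookup(customerId),       // remote call 1
            "balances",     balances.balancesFor(customerId),      // remote call 2
            "webActivity",  webActivity.recentActivity(customerId) // remote call 3
        );
    }
}
```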
A complex event bears some similarities to a composite service. It is coarse grained in the sense that it represents the collective significance of a set of simpler entities (its base or member events). However, the complex event summarizes and abstracts multiple simpler event objects, which are data, whereas a composite service summarizes and abstracts multiple simpler services, which are functions. Functions may encapsulate data behind the scenes, but the component that sends the request doesn’t see the data directly; it only sees the request and response messages. The fundamental distinction between requests and events makes the notion of granularity different.
Conventional wisdom holds that SOA services are coarse grained. However, that doesn’t mean that SOA services are always broad in their scope. As you have seen, some good services are narrowly focused (“verify ZIP code”). A request-driven service should be narrow enough that its work is entirely directed at a single task (it is functionally coherent). That makes the service easier to implement and more likely to be useful in multiple different applications. Functions that are of potential use in multiple applications are good candidates to be services. If a function is unique to one application, it should probably be combined into the service consumer (requester) component to avoid the complexity and overhead of building a separate service provider component and sending messages over a network.
On the other hand, services should be broad enough to perform an entire task. This reduces the number of services that must be used to carry out a business transaction. Calls to invoke an SOA service are different from most procedure calls or method invocations because SOA is intended to work across a network. Most procedure calls and method invocations have relatively low overhead because they take place within one computer. By contrast, an SOA service provider is usually on a different computer than the service consumer (requester), so every request and reply message pair entails significant overhead and latency. Latency can grow to unacceptable levels if a service is invoked too many times in the course of one transaction. Therefore, services should be designed to encompass enough function to justify the overhead of communicating over a network (that is also true for remote procedures and remote methods). Developers and architects who are interested in understanding service design in more depth will benefit from reading Martin Fowler’s Patterns of Enterprise Application Architecture (see Appendix A).
Note: Developers should design SOA services to minimize the number of messages that they receive and send, because they are invoked over a network. Service interactions should be less “chatty” (have fewer back-and-forth messages) than interactions with local (intra-computer) modules.
A similar set of considerations applies to event design. It’s impractical to put a notification onto a company’s network for every business event that occurs. Large companies capture 10,000 to 10 million business events per second in software through sensors, application systems, market data feeds, and other event sources. That’s almost a billion events per day at the low end and almost a trillion at the high end.
The solution is to distribute event-processing logic so that the volume of notifications is reduced as close to the source as possible. Developers use event-processing agents (EPAs) to filter events at the source application, sensor, or adaptor that sits between the source and the network. In a few sophisticated systems, the filter can be adjusted dynamically at any time. An EPA can get instructions from elsewhere in the event-processing network (EPN) to stop publishing notifications that no event consumers want to receive. In rare cases, the EPA may be able to stop the source sensor or application from generating the notifications entirely. More commonly, the EPA merely filters (discards) notifications that are not wanted, forwarding only those notifications for which there is an event consumer.
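The following sketch illustrates such a filtering agent. It assumes a publish callback and a filter predicate that can be replaced at run time when instructions arrive from elsewhere in the EPN; all names are illustrative.

```java
import java.util.function.Consumer;
import java.util.function.Predicate;

// Sketch of an event-processing agent that filters notifications at the
// source, forwarding only those that some consumer has asked for. The
// filter can be replaced at run time in response to EPN instructions.
public class FilteringAgent<E> {

    private volatile Predicate<E> wanted;  // current filter criterion
    private final Consumer<E> publisher;   // sends a notification onto the network

    public FilteringAgent(Predicate<E> initialFilter, Consumer<E> publisher) {
        this.wanted = initialFilter;
        this.publisher = publisher;
    }

    // Called for every raw event produced by the local sensor or application.
    public void onRawEvent(E event) {
        if (wanted.test(event)) {
            publisher.accept(event);       // forward onto the network
        }                                  // otherwise discard locally
    }

    // Called when the EPN tells this agent to change what it publishes.
    public void updateFilter(Predicate<E> newFilter) {
        this.wanted = newFilter;
    }
}
```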
It doesn’t make sense to flood the network with thousands or millions of simple notifications if multiple event consumers are really interested in the same, higher-level event notification. Distributed EPAs can aggregate simple notifications and find patterns so that a few complex-event notifications are published onto the network rather than thousands of simple events. Chapter 7 noted that the volume of events is typically sharply reduced at each level in an event hierarchy. Developers generally don’t have much control over the type of raw event information that is available. They have to use whatever a sensor, application, or web source can produce. However, they can control the nature of the notifications that are published on the enterprise network through the use of CEP-capable adaptors.
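A minimal sketch of the aggregation idea follows: instead of publishing every simple reading, the agent publishes one summary (complex-event) notification per fixed-size batch. The batching policy and event contents are assumptions for illustration; commercial CEP platforms use time windows and much richer pattern matching.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of an aggregating agent: many simple notifications in, one
// complex-event notification out.
public class AggregatingAgent {

    public record Summary(int count, double min, double max, double mean) {}

    private final int batchSize;
    private final Consumer<Summary> publisher;
    private final List<Double> window = new ArrayList<>();

    public AggregatingAgent(int batchSize, Consumer<Summary> publisher) {
        this.batchSize = batchSize;
        this.publisher = publisher;
    }

    public void onSimpleEvent(double reading) {
        window.add(reading);
        if (window.size() >= batchSize) {
            double min  = window.stream().mapToDouble(Double::doubleValue).min().orElse(0);
            double max  = window.stream().mapToDouble(Double::doubleValue).max().orElse(0);
            double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(0);
            publisher.accept(new Summary(window.size(), min, max, mean)); // one complex event
            window.clear();                                               // start a new batch
        }
    }
}
```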
A few commercial CEP platforms have been specifically designed to operate on a distributed basis. They filter events and synthesize complex events near or at the event sources to minimize the number of notifications that must be sent on the network and to reduce the need for multiple event consumers to recalculate the same complex events.
Note: Complex-event notifications should be shared (“reused”) among event consumers for the same reasons that request-driven services should be shared wherever practical.
Arguably the most important advance in SOA maturity in the past decade is the service component concept. The essence of a service component is to have more-complete metadata about the SOA service provider. Prior to service components, most SOA applications had a vulnerability found in procedure calls, most forms of object-oriented programming, and early Common Object Request Broker Architecture (CORBA). This vulnerability was that their metadata focused on documenting the interfaces through which they were invoked (their input and output parameters) but ignored their dependencies on the services that they invoked. There was no formal way to discover what services a service called unless you had the source code of the service. There also wasn’t enough information about the local security and integrity characteristics of the service provider. By 1998, some architectural experts such as Clemens Szyperski (see Appendix A) had clearly described the problem and laid out the solution. An excellent explication of the service component concept can also be found in Zoran Stojanovic’s A Method for Component-Based and Service-Oriented Software Systems Engineering (see Appendix A).
In a service component approach, the metadata for a service provider component covers the input and output messages, the identity of other services and remote modules that it invokes, the databases that it uses, its possible error conditions, and its security, performance, and availability policy characteristics. Vendors are moving to implement the service component concept, and some efforts are even being made to standardize it. The CORBA Component Model specification in CORBA v.3.0 outlined how to implement the service component concept as early as 1999, although the popularity of CORBA itself had already started to decline, so it did not achieve mainstream usage. The modern design model that underlies Microsoft’s Windows Communication Foundation (WCF) represents a vendor-specific way to implement service components. Finally, the Organization for the Advancement of Structured Information Standards (OASIS), an international open standards consortium, formed six new technical committees in August 2007, under an umbrella called the OASIS Open Composite Services Architecture Member Section, to develop the Service Component Architecture (SCA) specification. Most of the early work on service components was directed at request-driven services, but recent efforts have been undertaken to extend it to improve its support for event-driven services.
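To make the contrast with interface-only metadata concrete, the following is a purely hypothetical Java sketch of the categories of information that service component metadata covers. It is not SCA or WCF syntax; it simply enumerates the kinds of facts listed above.

```java
import java.util.List;

// Hypothetical illustration (not SCA or WCF syntax) of the information a
// service component's metadata covers, beyond the interface it exposes.
public record ServiceComponentDescriptor(
        String componentName,
        List<String> providedOperations,  // interface: input/output messages
        List<String> requiredServices,    // other services and remote modules it invokes
        List<String> databasesUsed,       // dependency and integrity information
        List<String> errorConditions,     // faults callers may have to handle
        String securityPolicy,            // e.g., required authentication scheme
        String availabilityPolicy) {      // e.g., target uptime or failover behavior

    // Example values are invented for illustration only.
    public static ServiceComponentDescriptor example() {
        return new ServiceComponentDescriptor(
                "CustomerProfileService",
                List.of("getCustomerProfile"),
                List.of("DemographicsService", "BalanceService", "WebActivityService"),
                List.of("CustomerDB"),
                List.of("CustomerNotFound", "BackendTimeout"),
                "mutual TLS",
                "99.9% availability, active/passive failover");
    }
}
```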
There are no event-processing projects per se; there are only development projects that use event processing in some aspects of a new or modified application system. Almost all large or complex business processes have some aspects that should be implemented with EDA, CEP, or both. To succeed at event processing, an enterprise must do six things well:
Acquire event-processing skills by either training its staff or hiring outside consultants
Incorporate event processing into its IT architecture
Use event-enabled packaged applications
Implement a software infrastructure that facilitates event processing
Integrate event processing into SOA initiatives
Develop event models and manage events and event patterns
The remainder of this chapter explores these action items in more detail.
Business and system analysts, enterprise and application architects, project leaders, and anyone else involved in collecting application requirements, modeling business processes, and developing high-level system designs must understand when and how to use EDA and CEP.
Simple, event-at-a-time EDA is underutilized in most companies because analysts and software engineers often fall back on more-familiar design patterns, particularly batch processing and request-driven design patterns. It’s not that EDA is hard to understand—much of it is common sense. However, analysts and architects who haven’t used MOM, document-driven systems, or similar techniques may be unfamiliar with some of the mechanical details of EDA. Chapters 3, 6, and 7 address these issues and clarify how to identify the parts of a business process that would benefit from EDA.
Application developers often build rudimentary forms of CEP into applications without realizing that they are doing CEP. Whenever you write application logic that combines two or more messages that contain data about business events, or use any other set of data that reports two or more events, you are doing CEP. When the logic is not complicated, there may be nothing to gain by calling it CEP or learning anything more. However, a formal understanding of CEP and the use of commercial CEP platforms are helpful if the volume of events is high, the latency of response must be low, and the rules for processing the event patterns will change fairly often. Chapters 3, 7, and 8 provide information on this subject.
CEP is taught in a few computer science programs, mostly at the graduate school level, sometimes under the label of “event stream processing” or “stream processing.” The Event Processing Technical Society (see www.ep-ts.com) also has a workgroup that is developing a curriculum for event processing in collaboration with researchers and educators at a group of universities. Appendix A provides pointers to books and other resources that can help you if you want to learn more about this field.
Companies that want to accelerate the development or lower the risk of a project that involves event processing may hire architects, developers, or project managers from a system integrator or software vendor. Consultants at a system integrator with a background in EDA and CEP are typically associated with a BPM or SOA team.
Most CEP software vendors have consulting practices. These are usually good sources of product-specific advice and general help on CEP implementation. Virtually all vendors of MOM, enterprise service buses (ESBs), and other SOA infrastructure have consultants that understand simple EDA and its role in modern application development. Some software vendors and system integrators offer application templates or industry frameworks that include software products, data models, best practices, and sample application flows that incorporate event-processing features. Outside experts may help your staff learn more quickly, especially if your people actively work with them.
Some leading-edge corporate IT architecture programs are beginning to pay attention to event processing, although IT architecture historically did little or nothing to explicitly address EDA or CEP.
Companies that have formal enterprise architecture guidelines that prescribe design patterns should document their guidance on when to use event-, request-, and time-driven patterns (see Chapters 3 and 6 for a discussion of some of the basic principles). Some companies separate enterprise architecture from application architecture. Enterprise architecture is strategic and general in scope—it answers questions such as, “How should all of our systems work?” Application architecture (sometimes called solution architecture) is more tactical and specific—it answers questions such as “How should this system or set of systems work?” Both types of architecture should allow all three design patterns and provide advice on where to use each.
Companies that have a review process that verifies application system design as part of the development cycle should examine the conceptual design of every new system to confirm that EDA and CEP have been utilized where appropriate. The architecture review committee should expect almost every large application system to have some EDA components. Most large systems should also have some continuous-monitoring capabilities. These will leverage observational notifications and do some type of CEP, although commercial CEP platforms will appear in a minority of projects during the next several years.
The presence of event-processing features should be a factor when selecting packaged applications, for the same reasons that it is a factor in the design of good custom applications.
Packaged applications were historically poor at emitting event notifications when a business event was detected or generated within the scope of their processing. They did a reasonably good job of accepting and responding to incoming events, but they didn’t do as well at sending outgoing notifications to other applications. This has changed, however. Many packaged applications can now emit event notifications for many types of business events. This is usually a configuration option, so developers who are installing and tailoring the package must study the business requirements to identify what type of notifications should be sent. Your other applications will have use for some, but not all, of the events that a packaged application could potentially send.
Some packaged applications are much better at event processing than others, so it’s worth the effort to investigate how well event processing is implemented. EDA can be particularly important for application integration scenarios that require heterogeneous applications of disparate origins to work together.
Many packaged applications have business dashboards or other continuous-monitoring and alerting features as part of the product. The vendor will rarely refer to these as “CEP” capabilities, although they technically would qualify as a limited type of CEP. Monitoring features are visible and generally desirable, so their presence will probably be pretty obvious if you get a demonstration of the software before you buy it. Most dashboard and monitoring features in packaged applications are fairly limited in scope. They monitor activities and parts of business processes that are conducted within the package, but they are not designed to monitor events that occur in other applications or elsewhere in the company.
Event processing is a design concept that doesn’t necessarily require any particular type of middleware or development tool. However, many event-processing applications would be impractical without the use of appropriate commercial middleware infrastructure or CEP tools.
EDA and CEP systems need some messaging infrastructure to convey the notifications from event producers to event consumers. All large companies already use the Web, e-mail, and other basic message-capable communication protocols, and most already use MOM, ESBs, and SOAP in some locations. Therefore, more than half of all event-processing projects don’t need any new messaging software, because they can use the infrastructure that is already in place. However, projects that are implementing demanding new applications that have high volume, low latency, high integrity, or other requirements may need to acquire commercial MOM, ESB, or other infrastructure products if they are not already present. From the network and middleware perspective, notifications are just messages. A company’s event network is conceptual—it’s physically just a part of the general network that carries voice, e-mail, web inquiries, remote procedure calls (RPCs), DBMS traffic, and other communication.
Guidance on selecting messaging technology should be part of your enterprise and application architecture programs, if you have such programs. Companies should also have a messaging technical support group equipped with configuration and monitoring tools for deploying the messaging infrastructure and managing it at run time.
Before 2004, developers of applications that perform CEP usually wrote their own CEP logic, rather than buying a commercial CEP platform or other commercial product. Even today, some demanding CEP applications and most dashboards and other monitoring capabilities are built with standard application development tools or off-the-shelf utilities. For low volume or simple applications, this is usually still the best strategy. However, more-demanding CEP applications require sophisticated algorithms and purpose-built CEP architectures, so companies will sometimes be better served by commercial CEP technology.
Commercial CEP, dashboard, and monitoring technology can be acquired as part of a CEP-enabled application, such as a financial trading platform or supply chain management (SCM) product, or it can be bought as a dedicated CEP platform, business event processing (BEP) system, appliance, or other point product. Companies that acquire CEP, dashboard, or monitoring software generally make their decisions on a project-by-project basis. These products have disparate specializations, so they are chosen according to the unique requirements of each application.
Where practical, architects and project leaders should use the same CEP, dashboard, BEP, and monitoring products for multiple projects to minimize license fees and training and support costs. However, virtually all companies will acquire multiple, partially overlapping, event-processing-related products during the next five years. At some point in the future, some companies will designate a preferred CEP platform, dashboard-building tool, or monitoring product as part of their enterprise technology architecture for use in projects around the company. However, the nature of CEP projects varies so much that no one set of products will be right for all CEP applications in a large company.
Many development projects that implement simple EDA, CEP, or both are promoted under the aegis of an SOA or BPM strategy. EDA, SOA, and BPM are compatible concepts that are often used together. An event-driven SOA application implements the principles of SOA and EDA simultaneously. Moreover, systems that leverage BPM generally use CEP in their monitoring capabilities (see Chapter 10 for more explanation of the links between BPM and event processing).
In view of these overlaps, companies should fold much of their EDA and CEP work into their SOA programs. Companies generally should not build a separate competency center (or “center of excellence”) for EDA. The SOA competency center may have one or two architects or analysts who have more training or experience in EDA, MOM, or CEP than other members of the team, but they should all be part of one team.
Conversely, it is a mistake to implement an SOA program that can’t support event-driven SOA and some type of continuous business monitoring from its inception. Some companies have shied away from event-driven SOA, consciously or not, to focus exclusively on traditional request-driven SOA. However, this results in overusing request-driven SOA and batch (time-driven) processing in new SOA systems. There is no benefit to deferring adoption of event-driven SOA—it’s not a major burden to undertake and its advantages should be tapped even by the initial SOA applications.
Incorporating event-driven SOA and CEP-based monitoring into SOA and BPM initiatives involves more than just organizational and training issues. Many messages used in event-driven SOA should be implemented with the same management tools, industry standards, security mechanisms, and middleware infrastructure products that are used for request-driven SOA interfaces. For example, event-driven SOA is commonly implemented with XML messages, so the same metadata format (for example, XSD) can also be used for event notifications. An ESB- or MOM-based SOA infrastructure and protocols such as HTTP and SOAP can support both event- and request-driven communication. However, most commercial SOA registry and repository products are primarily designed to support request-driven SOA, so developers may need to undertake custom extensions to fully support event processing.
Some CEP projects are too specialized to belong under an SOA or BPM program, for organizational and technical reasons. They may have specialized event sources (such as third-party event data feeds); their own ways to define event types; specialized communication protocols; non-XML (binary) data formats; and alternative monitoring tools and metadata management technology.
Event-processing functions coexist and are intertwined with transactional business functions, social application activities, office productivity applications, collaboration, and other kinds of application functions. With the exception of some pure-play monitoring systems (see Chapter 8), event processing is just a description of how part of an application system works.
The design of business events, notifications, and event patterns is partly a black art, like data modeling, form design, or designing SOA services. Analysts, architects, and software developers must determine all the following:
Which business events are important and relevant
What notifications should be generated to report those business events, and the topics (or “subjects”) of the notifications
What data items to include in the event types
Which application components will produce each kind of notification
What channels will transport the notifications
What CEP algorithms to apply, and what event patterns to look for (this is pattern discovery; see Chapter 7)
What the consumer components and people should do with the notifications and alerts once they get them
Events, notifications, and event patterns are designed at the same point in the development cycle that SOA services and databases are designed. This is typically after the business requirements and overall process model have been explored but fairly early in the development cycle. In modern iterative development approaches, this is a recurring activity. After some initial parts of a system are deployed, they are used, monitored, and periodically refined, and additional segments of the application are added.
An application system typically uses only a few event types—often under 10, and usually under 25. However, the business processes in a large department or business unit may collectively involve a dozen or more application systems with many dozens or even hundreds of events, channels, and event patterns. A large company will have hundreds of application systems and thousands of business processes, potentially requiring thousands of different event types, channels, and patterns.
A company’s event traffic is an event “cloud,” not an event stream, because it isn’t organized in any systematic way. A company’s event cloud is part of the larger event cloud of its virtual enterprise that encompasses some of its suppliers, customers, and other business partners. The virtual enterprise event cloud is, in turn, a subset of the global event cloud that includes all companies, consumers, governmental bodies, and other organizations that produce or consume events.
We are not aware of any company that has developed an enterprise-wide map or registry that documents all its events and event patterns. An enterprise event cloud is probably too complex to ever centrally manage at a detailed level. A company that formally documents and manages even some of its important events is ahead of the industry mainstream because many companies don’t do any systematic event management. However, we are aware of some leading-edge companies that are beginning to manage their events in a federated fashion, similar to the way they manage their SOA services.
As indicated in the previous section, event management should usually be conducted by people within an SOA competency center rather than in a separate organization. Many large companies already have SOA competency centers in each division, subsidiary, functional area, or large department. These are typically federated and loosely overseen by a central, corporate-level SOA competency center. The central organization sets company-wide policies, provides architectural guidance, and coordinates the communication among the distributed competency centers. Managers try to balance the competing forces of central control and local autonomy.
A federated SOA competency center can be extended to cover event management. If a competency center faithfully collects and stores metadata for all notifications and event types in a registry or repository, project teams can use them in future application development wherever they are relevant to the new business requirements. Event streams used for one purpose will sometimes be relevant for new purposes. Systematic event management can reduce, although not entirely eliminate, redundant event data.
A significant amount of event sharing (“reuse”) occurs after the fact with event logs. Numerous important applications are already in use that mine event logs from previous hours and days to provide insight into things that have happened. Customer buying preferences are detected, credit card fraud is discovered, and airline schedules are studied and improved by using event logs.
A person or group in a competency center should systematically collect and maintain metadata for all event types, notifications, and event patterns that are used in the applications and SOA services in the business unit. If an event is formatted in XML, its metadata can be managed in an XML Schema Definition (XSD), Schematron, or similar mechanism, just as request-driven SOA documents are. However, the reports generated for developers and managers need a way to distinguish event notifications from other kinds of SOA documents. Moreover, events that use binary data will require different metadata mechanisms than those used for XML data.
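For XML notifications, the same standard validation machinery used for request-driven SOA documents applies unchanged. The following sketch validates a notification against an XSD held in the registry or repository; the file names are placeholders.

```java
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

// Validates an XML event notification against the XSD held in the
// registry/repository, exactly as one would validate a request-driven
// SOA document. File names are placeholders.
public class NotificationValidator {
    public static void main(String[] args) throws Exception {
        SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(new File("order-shipped-event.xsd"));
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(new File("sample-notification.xml")));
        System.out.println("Notification conforms to its registered event type.");
    }
}
```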
Governance for events lags the governance of request-driven SOA services in its maturity. Leading-edge companies are beginning to systematically manage notification topics and event types, but we are not aware of any company that is managing event patterns, event-processing language (EPL), or event hierarchy metadata in a systematic way. Companies that use CEP may implement these things within individual CEP application projects, but don’t document them elsewhere in an organized fashion. Neither are we aware of any features in commercial SOA-oriented registries and repositories for managing topic trees, event patterns, EPL, causality, or event hierarchies. Repositories are generally extensible, so it should be possible for an adroit event administrator to extend a repository to handle these things. It’s also likely that repository vendors will add support for this in the future.
Companies should allow their portfolio of events and patterns to evolve organically, bottom up, as part of each application project’s development process. We don’t recommend developing a detailed company-wide “as is” inventory or “to be” architecture of event notifications and patterns. Similar activities for enterprise-wide SOA service models and enterprise data models were notoriously unsuccessful where they were attempted. Sweeping architecture projects require a large investment in staff time and have long-term payback, if they pay back at all. The resulting architecture documents don’t age well—their value degrades rapidly because changes in the IT portfolio and the business make the information obsolete. Moreover, the documents are commonly underused or entirely ignored by the project teams building application systems. The value of enterprise-wide architecture is inherently limited because the needs of future projects cannot be accurately anticipated in advance. By the time developers begin work on a new project, events and patterns designed months or years earlier will usually need to be changed.
SOA and EDA are complementary—they can be used together as event-driven SOA. Event-driven SOA can coexist with request-driven SOA and time-driven designs in the same application. Event processing is a departure from traditional IT practices in certain ways, so it may require companies to acquire additional expertise and software infrastructure. Companies should modify their IT architecture and SOA governance practices to support event processing.