13
Integrating via HTTP with RPC and REST

WHAT’S IN THIS CHAPTER?

  • An introduction to the idea of using HTTP to integrate bounded contexts
  • An introduction to the REST protocol
  • Guidance for choosing between RPC and REST when integrating with HTTP
  • DDD-focused examples of implementing RPC with SOAP and plain XML
  • Examples of implementing RPC using WCF and ASP.NET Web API
  • A discussion of how to use REST with DDD to achieve the fault tolerance and scalability of messaging systems while still being domain focused
  • An example of building a scalable, fault-tolerant, event-driven, RESTful distributed system using ASP.NET Web API
  • Guidance for enabling loosely coupled, independent teams when integrating bounded contexts with HTTP

Wrox.com Code Downloads for This Chapter

The wrox.com code downloads for this chapter are found at www.wrox.com/go/domaindrivendesign on the Download Code tab. The code is in the Chapter 13 download and individually named according to the names throughout the chapter.

Hypertext Transfer Protocol (HTTP) is a ubiquitous protocol that the billions of devices connected to the Internet understand. It can also be a shrewd choice for integrating bounded contexts. Being so widespread, HTTP has clearly proven that it enables applications running on different hardware and software stacks to communicate relatively easily. This means that if you have bounded contexts using different technologies, HTTP can be very appealing. You saw in the previous chapter that although it is possible to integrate different messaging frameworks, there can be a lot of risky and time-consuming work involved. But because HTTP is a well-known standard, you can follow existing conventions when integrating with it.

Chapters 11, “Introduction to Bounded Context Integration,” and 12, “Integrating via Messaging,” showed you that integrating bounded contexts is not just about making software applications talk to each other. It’s also about providing scalability and fault tolerance. You may be wondering why there are so many messaging frameworks and middleware solutions if HTTP satisfies these needs. That’s difficult to answer with any certainty, but it does highlight the fact that HTTP is often overlooked when building event-driven distributed systems. However, REST is definitely starting to gain popularity as a choice for building distributed systems, and it is definitely an option you should at least consider. This chapter shows you why.

Although HTTP hasn’t traditionally been used to build reactive, event-driven systems, it has been massively popular for integrating applications using remote procedure call (RPC). In contrast to REST, there are thousands of examples of applications that use RPC to integrate. This means there’s a lot of real-world evidence showing its strengths and weaknesses. You learned about some of them in Chapter 11, and in this chapter, you will see concrete examples by building and comparing RPC and RESTful Domain-Driven Design (DDD) systems.

Whichever HTTP-based solution you choose, there are patterns and principles that synergize with DDD techniques to make domain concepts explicit. For example, Chapters 11 and 12 demonstrated how moving to an event-driven architecture based on domain events can have business and technical benefits. You’ll see in this chapter that using domain events as the messages sent between bounded contexts via REST can again be very expressive.

Events don’t always make sense as HTTP application programming interface (API) formats, though, especially when exposing your domain as APIs to external services. Consider an API that exposes a catalog of products: websites don’t want a full history of events; they just want to see the latest snapshot showing the most up-to-date information. So this chapter also contains examples for exposing domain concepts that aren’t events as HTTP APIs.

A final topic presented in this chapter is enabling loosely coupled teams to efficiently iterate their bounded contexts. Chapter 11 showed how concepts like Service Oriented Architecture (SOA) support this need in general, and Chapter 12 showed how to apply those concepts to messaging architecture. This chapter provides similar guidance for achieving loosely coupled teams when using HTTP as your integration protocol.

Why Prefer HTTP?

When the whole world is using HTTP on all five of the devices they own, it must have its positives. Here are a few of the reasons why integrating your bounded contexts with HTTP might make a significant difference to the success of the projects you are involved in.

No Platform Coupling

Each application or component that integrates with HTTP may be built using any technology thanks to HTTP being a platform-agnostic protocol. Not only is this beneficial for creating loosely coupled applications, but it can help to create loosely coupled teams that have few dependencies on each other.

Using HTTP, each bounded context must honor its public contracts, which are the HTTP request and response formats. Providing the contracts are adhered to, teams are then free to mercilessly refactor, rewrite their applications in new technologies, or continue to add business value at their own pace. All that matters is that they honor their public contract so that integration with other bounded contexts remains intact.

Everyone Understands HTTP

HTTP is everywhere. Almost all programming languages and run times have a wealth of libraries and support for using HTTP. So when integrating with HTTP, there is a massive amount of support available to you. In this chapter, you learn that .NET has a number of frameworks, including Windows Communication Foundation (WCF) and ASP.NET Web API, for building HTTP-based integrations.

Another advantage to everyone understanding HTTP comes when you need to expand your team. It’s easy to find a developer who understands HTTP, but it can often be more challenging to find a developer who understands messaging systems or particular messaging frameworks.

Lots of Mature Tooling and Libraries

On top of the modern frameworks and libraries for building HTTP-based integrations, some of the tooling is quite advanced. One example of this is the way Visual Studio generates classes for you when you point it to a particular type of web service. The classes provide methods that mimic the API of the HTTP web services, so you can write what appears to be standard object-oriented code yet is actually communicating across the network. This is demonstrated later in the WCF examples.

Dogfooding Your APIs

When all your communication is over HTTP, there may be no need to have dedicated channels for internal and external communication. In plain English, this means that you can build APIs that bounded contexts use to communicate, and third parties can use those same APIs. In contrast, messaging systems are almost always for internal use only, so separate APIs have to be produced for external consumers as well.

The practice of using the APIs internally that you share with clients and partners is known as dogfooding. Dogfooding is desirable because it helps you get the same experience as your customers. If your API contains pain points that are putting customers off, dogfooding might help you find and remove them. Of course, in some situations, dogfooding has drawbacks and might not be the best approach. One example might be when you need to have stronger performance guarantees internally than externally.

RPC

If you want to build distributed systems that integrate with HTTP, one option is to use RPC. As discussed in Chapter 11, RPC’s “hide-the-network” abstraction can be useful when development speed is important or scalability needs are not too high. On the other hand, the inherent tight coupling associated with RPC can make scalability requirements and loosely coupled teams harder to achieve.

In the following section are examples that demonstrate the previously mentioned strengths and weaknesses of RPC. They allow you to start forming your own opinions and get a feel for where you might want to use RPC in the future.

Implementing RPC over HTTP

You have a few choices when it comes to implementing RPC over HTTP. The traditional choice has been to use a protocol called SOAP (Simple Object Access Protocol), which adds another layer on top of HTTP. In recent years, however, SOAP has seen a massive decline in popularity. Nowadays, the more modern approach is to simply use plain eXtensible Markup Language (XML) or JavaScript Object Notation (JSON) as the payload in an RPC call. So that you can make informed decisions, you’ll see both options in this section, starting with SOAP.

SOAP

SOAP fully embraces the concept of RPC by including rich information in the payload, such as type and function meta data. This makes it easy to convert the contents of a SOAP message into a method call on the remote receiver. Due to this richness, and SOAP’s massive popularity with the previous generation of developers, the tooling support for SOAP is advanced, as you’ll see shortly.

To learn about SOAP and play with the advanced tooling that has been built around it, in this section you integrate two bounded contexts that form part of a social media application. For this scenario, envision that you are part of a start-up building a Twitter-like product that’s gaining traction in the market and a rapidly increasing user base.

For this example, you are going to use RPC to help the development teams move faster. Currently they have a single, monolithic, Big Ball of Mud (BBoM) application where bounded contexts are merely libraries that have a binary dependency on each other. This is causing problems to ripple across the entire business because changes to one bounded context break others. You’re going to remove this problem by isolating each bounded context as a standalone application that can only be communicated with over HTTP. To make this transition as seamless and rapid as possible, you’ll use RPC to replace in-process method calls from one bounded context to another with RPCs over the network (completely removing the binary dependency). This will demonstrate how RPC requires few changes to your code and makes the network almost invisible.

Designing for RPC

Designing for RPC involves deciding which method calls will be RPCs across the network. Apart from that, your code will mostly look the same as it did running on a single machine. Figure 13.1 shows the new design for the current scenario. Note how it replaces method calls between the two bounded contexts with RPCs across the network.


FIGURE 13.1 The “find recommended users” use case.

In the (fictitious) current system, the Discovery bounded context is calling FindUsersFollowers() on the FollowerDirectory class, which belongs to the Account Management bounded context. This is a binary dependency that requires in-process communication. You can see the code demonstrating this in Listing 13-1.

Listing 13-1 shows the FollowerDirectory.FindUsersFollowers() method being called. This is the problematic method that couples two bounded contexts. It is going to be replaced with a similarly named RPC across the network, thereby removing the problematic binary dependency between the Discovery and Account Management bounded contexts.
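
To make the binary coupling concrete, here is a rough sketch of the kind of in-process call being replaced. The Recommender class and the recommendation logic are illustrative assumptions, not the exact code of Listing 13-1.

using System.Collections.Generic;

// Illustrative sketch only: the Discovery bounded context calling directly
// into the Account Management bounded context via a binary dependency.
public class Recommender
{
    // Direct reference to a class owned by the Account Management bounded context
    private readonly FollowerDirectory _followerDirectory = new FollowerDirectory();

    public IList<Follower> GetRecommendedUsers(string accountId)
    {
        // In-process call that couples the two bounded contexts at the binary level
        var followers = _followerDirectory.FindUsersFollowers(accountId);

        // Recommendation logic would go here; for this sketch, just return the followers
        return followers;
    }
}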

Figure 13.1 also provides further background for the use case you are going to implement. You can see that the use case is triggered when a user logs in and arrives at her home page. When this happens, the business requirement is for users to see a list of recommended users whom they might want to follow. This is important to the business because it allows users to discover other users and hot topics so they continue to return to the site. An entire team of business people and developers is focused on helping users discover content. The team is known as the Discovery team, and the Discovery bounded context represents its area of the domain.

Implementing RPC over HTTP with WCF and SOAP

Integrating two bounded contexts using SOAP can initially be quite fast and relatively easy when using .NET’s Windows Communication Foundation (WCF). You’ll see this firsthand as you start to implement the use case shown in Figure 13.1 in the following sections.

Creating a WCF Service

To get started, you need to create a blank Visual Studio solution that will be home to all the bounded contexts (in this SOAP example). You can call this solution PPPDDD.SOAP.SocialMedia. The first bounded context to be created is Account Management. To add the Account Management bounded context, you need to add a new WCF Service Application to the project called AccountManagement, as shown in Figure 13.2.


FIGURE 13.2 Adding the Account Management WCF Service.

In the old monolithic application, as you saw in Listing 13-1, there was a class called FollowerDirectory that had a method called FindUsersFollowers. To turn this into an RPC call, you can simply add a WCF Service to the root of the project called FollowerDirectory. Then you can use WCF Service contracts to declare your RPCs, as shown in the next section.

Service Contracts

After you’ve added the WCF Service, two files are added to the root of the project: FollowerDirectory.svc.cs and IFollowerDirectory.cs. The latter is what Visual Studio uses to generate a public SOAP contract (using the Web Service Description Language, or WSDL). The former is the implementation; your custom code goes in it and is run when RPC calls are made at run time. You see this in action shortly, so don’t worry if it doesn’t make perfect sense.

WCF has two key attributes: ServiceContract and OperationContract. ServiceContract is added to an interface to signify that it defines methods that can be called as RPCs across the network. OperationContract then signifies which methods on an interface decorated with ServiceContract are the RPCs. Therefore, to create the FollowerDirectory.FindUsersFollowers() RPC call, you should apply those two attributes, as shown in Listing 13-2.

Listing 13-2 is all that Visual Studio and WCF require from you to be able to generate the RPC infrastructure. You’ll see later that Visual Studio automatically generates proxies of these classes on clients of the web service. This “networking for free” is what makes WCF and SOAP so appealing to many.

Before your service will work, you need to provide an implementation in the FollowerDirectory.svc.cs file. You can see a basic implementation in Listing 13-3 that generates a few dummy Followers in-memory and returns them. You can update your FollowerDirectory with this implementation.
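
As a hedged sketch, the contract and implementation just described might look something like the following. The Follower data contract and the dummy data are assumptions based on the SOAP response shown later, not the exact code of Listings 13-2 and 13-3.

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// IFollowerDirectory.cs: the public contract that WCF exposes via WSDL
[ServiceContract]
public interface IFollowerDirectory
{
    [OperationContract]
    List<Follower> FindUsersFollowers(string accountId);
}

// A simple data contract so the Follower type can be serialized into SOAP
[DataContract]
public class Follower
{
    [DataMember]
    public string FollowerId { get; set; }

    [DataMember]
    public string FollowerName { get; set; }

    [DataMember]
    public List<string> SocialTags { get; set; }
}

// FollowerDirectory.svc.cs: the implementation that runs when the RPC arrives
public class FollowerDirectory : IFollowerDirectory
{
    public List<Follower> FindUsersFollowers(string accountId)
    {
        // Dummy in-memory data purely for demonstration
        var followers = new List<Follower>();
        for (var i = 0; i < 3; i++)
        {
            followers.Add(new Follower
            {
                FollowerId = "follower_" + i,
                FollowerName = "happy follower " + i,
                SocialTags = new List<string> { "programming", "DDD", "Psychology" }
            });
        }
        return followers;
    }
}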

Testing WCF Services

You’re now in a position to test that you really can call FindUsersFollowers() over the network as an RPC. Visual Studio makes this easy with the test client it provides. To run the test client, highlight the FollowerDirectory.svc item in the Solution Explorer (not FollowerDirectory.svc.cs) and press F5. You will then see the test client as illustrated in Figure 13.3.


FIGURE 13.3 Visual Studio’s WCF test client.

To test your new service, just double-click its name (FindUsersFollowers) in the left-hand Explorer pane, and then enter a value in the Value column for the row accountId in the right pane. After doing that, if you click the Invoke button, your FindUsersFollowers RPC will be carried out over the network, and the results will be displayed in the lower half of the right pane, as shown in Figure 13.4.


FIGURE 13.4 Invoking an RPC in WCF’s test client.

If you want to see how the data was transmitted across the network, you can click on the XML tab at the bottom of the right pane. By doing that, you see the raw SOAP:


<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
  <s:Header />
  <s:Body>
    <FindUsersFollowersResponse>
      <FindUsersFollowersResult
xmlns:a="http://schemas.datacontract.org/2004/07/
AccountManagement" xmlns:i="http://www.w3.org/2001/
XMLSchema-instance">
        <a:Follower>
          <a:FollowerId>follower_0</a:FollowerId>
          <a:FollowerName>happy follower 0</a:FollowerName>
          <a:SocialTags
xmlns:b="http://schemas.microsoft.com/2003/10/
Serialization/Arrays">
            <b:string>programming</b:string>
            <b:string>DDD</b:string>
            <b:string>Psychology</b:string>
          </a:SocialTags>
        </a:Follower>
        <a:Follower>
        ...
Creating WCF Service Clients

You’re now about to see that the WCF, SOAP, and Visual Studio combination does a lot of the hard work for you. You’ll first create the Discovery bounded context project, and you’ll then see how you can start making RPC calls to the Account Management bounded context just by pointing the Discovery bounded context to the uniform resource locator (URL) of the Account Management bounded context.

To begin, you need to add a new WCF Service Application to the solution called Discovery that represents the Discovery bounded context. Inside the Discovery bounded context, you then need to add a WCF Service called Recommender. Recommender provides the web services that the website uses to get the list of recommended users. You may find it helpful to quickly refer to the design in Figure 13.1.

Recommender has two responsibilities. First, it provides the API that clients use to request recommendations. Second, it makes the RPC to the Account Management bounded context to get an Account’s followers. To implement that, you need to have the Account Management project running. (Highlight it in the Solution Explorer and press Ctrl+F5.) You can then test that it is running by directly accessing it in a web browser at http://localhost:3100/FollowerDirectory.svc. If the page has the heading “FollowerDirectory Service,” things are working.

Next, you need to pass the URL to Visual Studio so it can generate the proxy classes. If you right-click on the References node for the Discovery project in the Solution Explorer and select Add Service Reference, the URL can be pasted into the Address field. All that’s left is to change the Namespace (at the bottom of the Add Service Reference dialog) to AccountManagement and click Go. Your screen should then resemble Figure 13.5, which shows the expanded FollowerDirectory node revealing the web service it has identified. Once you’re happy, you can click OK.


FIGURE 13.5 Adding a Service Reference in Visual Studio.

To see the generated proxy classes, you can inspect the AccountManagement item that was added to the Service References folder. Figure 13.6 shows the generated proxy classes. These are the classes that you will shortly instantiate in the Discovery bounded context and call methods on to invoke RPCs across to the Account Management bounded context (via SOAP/HTTP).


FIGURE 13.6 Generated proxy classes.

The last step is to build the Recommender web service that puts the generated proxy classes to work. Listing 13-4 and Listing 13-5 show a basic implementation of the IRecommender and Recommender that do just enough to demonstrate the RPC call. You need to add these to your Discovery project.


In Listing 13-5, an instance of AccountManagement.FollowerDirectoryClient is created. It is a proxy class that Visual Studio generated when adding the Service Reference. When its FindUsersFollowers() method is called, it fires an RPC across the network and calls into the code you added in the Account Management bounded context. The main takeaway here is that WCF and Visual Studio took care of all the network-related plumbing. Most of the code you added would look very similar even if there was no network involved.
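
The following sketch shows roughly how the generated proxy might be used inside the Recommender service; the IRecommender contract, the RecommendedUser type, and the filtering logic are illustrative assumptions rather than the precise contents of Listings 13-4 and 13-5.

using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;

// Illustrative sketch of the Discovery bounded context's WCF service contract
[ServiceContract]
public interface IRecommender
{
    [OperationContract]
    List<RecommendedUser> GetRecommendedUsers(string accountId);
}

[DataContract]
public class RecommendedUser
{
    [DataMember]
    public string Id { get; set; }

    [DataMember]
    public string Name { get; set; }
}

// Recommender.svc.cs
public class Recommender : IRecommender
{
    public List<RecommendedUser> GetRecommendedUsers(string accountId)
    {
        // Proxy class generated by Visual Studio when the Service Reference was added;
        // calling FindUsersFollowers() fires a SOAP RPC across the network.
        var followerDirectory = new AccountManagement.FollowerDirectoryClient();
        var followers = followerDirectory.FindUsersFollowers(accountId);

        // Naive "recommendation" logic purely for this sketch
        return followers
            .Where(f => f.SocialTags.Contains("DDD"))
            .Select(f => new RecommendedUser { Id = f.FollowerId, Name = f.FollowerName })
            .ToList();
    }
}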

You can test that everything is successfully working by setting both projects to start up (as shown in the previous chapter by right-clicking the solution in the Solution Explorer and choosing Set Startup Projects) and then pressing F5 on the Recommender.svc in the Solution Explorer. The WCF test client pops up again, and this time you need to invoke GetRecommendedUsers(), as demonstrated in Figure 13.7.


FIGURE 13.7 The RPC must have occurred.

The result of the RPC call in Figure 13.7 is the hard-coded data that is returned by the web service you created. This indicates that the RPC call between the two bounded contexts successfully occurred. This example’s aim of integrating the Account Management and Discovery bounded contexts without a binary dependency has now been achieved.

SOAP’s Decline

Although there are many existing public SOAP APIs, new APIs just aren’t being built with SOAP anymore. One of SOAP’s big pain points is the complexity and verbosity of its message format, which you saw a glimpse of earlier. People are critical of needless complexity, and they often cite SOAP as a perfect example of unnecessary complexity. Accordingly, you should be careful about exposing public SOAP APIs, but you shouldn’t feel too concerned about using SOAP internally if it suits your needs.

The modern preference for RPC over HTTP is to use lightweight, plain XML or JSON payloads. The next section shows examples of this using ASP.NET Web API.

Plain XML or JSON: The Modern Approach to RPC

To see how you can integrate over HTTP without the complexity and verbosity of the SOAP format, in this section you re-create the social media SOAP integration using relatively lightweight JSON instead. The following is the JSON version of the SOAP payload shown earlier. You may want to go back and compare the two to fully appreciate how much more compact the JSON is.


{
    "followers": [
        {
          "accountId":"34djdlfjk2j2",
          "socialTags": [
              "ddd","soa","tdd","kanban"
           ]
        },
        ...
    ]
}
Implementing RPC over HTTP with JSON Using ASP.NET Web API

The ASP.NET Web API is Microsoft’s latest framework for creating web services. Later you will see how it can be a good choice for building RESTful APIs. But for now you’ll see how it can make life easy when building JSON RPC APIs. As a starting point, you need a new blank Visual Studio solution called PPPDDD.JSON.SocialMedia. This solution needs to be populated with an ASP.NET Web Application project called AccountManagement. As you go through the creation process for the project, you need to select the Empty template and check the Web API check box. Once it’s created, you need to configure the project to always start on port 3200.

Controllers contain the code that will be run when web requests are made to your Web API. You can see controllers as an opportunity to express domain concepts from the ubiquitous language (UL), although you should be careful about putting domain logic in them. Some developers conceptualize controllers as Application Services.

To start with this example, you need to add a class called FollowerDirectoryController to the Controllers folder in the root of the project; by convention, Web API controllers are placed in this folder. The code for FollowerDirectoryController is shown in Listing 13-6.

For demonstrative purposes, the FollowerDirectoryController in Listing 13-6 returns a list of hard-coded Followers (as JSON). In a real application, this class would likely perform database lookups or API calls to get the required follower information.
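
As a rough sketch of such a controller (the Follower and FollowersRepresentation types mirror the JSON shown earlier, and an action-based route of the form api/{controller}/{action} is assumed; this is not the precise contents of Listing 13-6):

using System.Collections.Generic;
using System.Web.Http;

// Illustrative sketch only. Assumes an action-based route so that the URL
// /api/followerdirectory/getusersfollowers?accountId=123 resolves here.
public class FollowerDirectoryController : ApiController
{
    [HttpGet]
    public FollowersRepresentation GetUsersFollowers(string accountId)
    {
        // Hard-coded data for demonstration; a real implementation would query
        // the Account Management data store.
        return new FollowersRepresentation
        {
            Followers = new List<Follower>
            {
                new Follower { AccountId = "34djdlfjk2j2", SocialTags = new[] { "ddd", "soa", "tdd", "kanban" } },
                new Follower { AccountId = "another_dummy_id", SocialTags = new[] { "ddd", "rest" } }
            }
        };
    }
}

public class FollowersRepresentation
{
    public List<Follower> Followers { get; set; }
}

public class Follower
{
    public string AccountId { get; set; }
    public string[] SocialTags { get; set; }
}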

After starting the application (by pressing F5), you can test the new web service by hitting it in the browser. Using Web API’s default conventions, it is accessible at http://localhost:3200/api/followerdirectory/getusersfollowers?accountId=123, where the value for the accountId parameter is variable. (It can be anything in this example.) Figure 13.8 shows an example of hitting the API from a browser and viewing the JSON response.


FIGURE 13.8 Viewing the output of a Web API controller in a browser.

You can see with this approach that you had to do a tiny bit of extra work setting up the controller compared to SOAP and WCF, but the data sent across the wire is so much cleaner and lighter that it’s easy to understand what is happening. This is also a bonus when it comes to debugging problems.

Because a plain JSON API lacks the rich meta data provided by SOAP, it is not possible to automatically generate proxy classes for it. It still doesn’t have to be a lot of extra work, though, as you’ll now see.

To create a client of your JSON API, you need to add a new ASP.NET Web Application, called Discovery, to represent the Discovery bounded context. Inside the Discovery project, you need to add a class called RecommenderController in the Controllers folder. The code for this class is shown in Listing 13-7. For it to compile, you need to install HttpClient and ServiceStack.Text by running the following commands in the Nuget Package Manager Console:


Install-Package Microsoft.AspNet.WebApi.Client -Project Discovery
Install-Package ServiceStack.Text -Project Discovery

As Listing 13-7 shows, the logic for this implementation is similar to the WCF approach in the previous example. However, with this solution, you have to do the manual work of making the HTTP request and parsing the response yourself. As you can see, though, there are feature-rich libraries, provided by Microsoft and the community, that do a lot of the laborious work for you.
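
The following sketch illustrates the kind of code involved; the port, the response types, and the trivial recommendation logic are assumptions for demonstration, not the precise contents of Listing 13-7.

using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Web.Http;
using ServiceStack.Text;

// Illustrative sketch of the Discovery bounded context's controller
public class RecommenderController : ApiController
{
    [HttpGet]
    public IEnumerable<RecommendedUser> GetRecommendedUsers(string accountId)
    {
        using (var client = new HttpClient())
        {
            // Manually make the HTTP request to the Account Management bounded context
            var url = "http://localhost:3200/api/followerdirectory/getusersfollowers?accountId=" + accountId;
            var json = client.GetStringAsync(url).Result;

            // Manually parse the JSON response, here with ServiceStack.Text
            var response = JsonSerializer.DeserializeFromString<FollowersResponse>(json);

            // Trivial recommendation logic purely for this sketch
            return response.Followers.Select(f => new RecommendedUser { AccountId = f.AccountId });
        }
    }
}

public class FollowersResponse
{
    public List<Follower> Followers { get; set; }
}

public class Follower
{
    public string AccountId { get; set; }
    public string[] SocialTags { get; set; }
}

public class RecommendedUser
{
    public string AccountId { get; set; }
}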

To test that everything works as intended, you need to set both projects as start-up projects (as shown in previous examples). You then need to navigate to the URL of the GetRecommendedUsers API you just created. The URL is http://localhost:{port}/api/recommender/getrecommendedusers?accountId=123 depending on which port the Discovery bounded context is using on your machine. A browser automatically pops up informing you of the port number when you press F5 inside the solution (or you can manually specify a port, as shown earlier in this chapter).

Choosing a Flavor of RPC

Integrating bounded contexts with RPC over HTTP was just demonstrated using two common approaches. With WCF and SOAP you can add a few attributes to your domain model and suddenly it becomes a distributed system free from binary dependencies between bounded contexts. In turn, this allows teams to be more independent and not have to worry about breaking other bounded contexts. One problem with SOAP, though, is that the format is complex and verbose; in many cases RPC is used for simple integrations, so that complexity is hard to justify. This is why plain XML or JSON is the more popular choice today.

Both options, however, have the flaws inherent to RPC that were mentioned in Chapter 11. First, they can be harder to scale efficiently. Looking back at the diagram in Figure 13.1, consider a forthcoming request from the business to improve the speed at which recommended followers are displayed onscreen. Both the Discovery bounded context and the Account Management bounded context may need to be scaled to provide an overall performance improvement. If the chain of RPCs spanned three bounded contexts, then all three may need to be scaled due to the temporal coupling.

In terms of fault tolerance, there are also worrying signs. If the Account Management bounded context goes down, the Discovery bounded context also goes down because it cannot RPC across and get the followers. Again, this is due to the temporal coupling.

It may seem like you need to choose between integrating with HTTP for loose platform coupling and having a scalable, fault-tolerant system that uses a messaging framework like NServiceBus. But that’s completely untrue. The next section of this chapter shows that you can have the scalability and fault tolerance of a messaging system and the loose platform coupling of HTTP by combining the principles of reactive programming with REST.

REST

In this section, you rebuild the social media integration for a third time. This time, however, you completely redesign for scalability, fault tolerance, and development efficiency using event-driven REST. This third design iteration relies only on HTTP rather than a heavyweight messaging framework, yet it still follows the reactive, SOA, and loose coupling principles presented in Chapter 11.

REST is a misunderstood and misused term, though. Before you build any RESTful applications, it is crucial that you understand what REST really is.

Demystifying REST

REST was introduced to the world by Roy Fielding. He created REST as an architectural style based on the principles that make the Internet so successful. REST has a number of fundamental concepts, including resources and hypermedia, which provide the platform for evolvable clients and servers.

Resources

HTTP requests to a RESTful system are for resources. Responses contain the requested resource (if successful). Resources can be things like documents, such as web pages, or media, such as MP3 files. Resources work well with DDD because concepts in your domain can be expressed as resources—further spreading the UL. As a basic example, in a financial domain, there could be transactions that transfer funds from one account to another. The UL would contain an entry for each type of transaction, such as B2B Transaction or Personal Transaction. These transactions could be exposed as resources accessible from the uniform resource identifiers (URIs aka URLs) http://pppddddemo.com/B2bTransactions or http://pppddddemo.com/PersonalTransactions. This is completely different from RPC, where requests and responses simulate method calls using imperative naming.

Resources have a one-to-many relationship with representations. In other words, when requesting a resource, you can ask for it in a different content type, such as JSON, XML, or HTML. Each response will be the same resource but will be presented differently according to the syntactic rules of the requested format. You will see shortly that clients of RESTful APIs choose a format by specifying the required Multipurpose Internet Mail Extensions (MIME) type in HTTP’s “Accept” header.

Here are a couple other key details relating to resources:

  • There is a many-to-one mapping between URIs and resources. Multiple URIs, therefore, can point to the same resource.
  • Resources can be hierarchical. For instance, to expose a user’s address, you could use a URI such as /accounts/user123/address. Each segment in the path is a child resource of the segment to its left, just like a path in a file system.

Hypermedia

Humans browsing the web go from web page to web page by clicking links. This is hypermedia in action. By returning hyperlinks in resources, computers, too, can move from resource to resource simply by following links. This is demonstrated later in the chapter.

Hypermedia presents another opportunity for DDD practitioners to express their domain more explicitly. Imagine a car insurance policy. Each step of the application process could be represented as hypermedia links to the next possible steps, expressed using the UL. Not only does this express domain concepts, but it can be used to model workflows or domain processes.

Using hypermedia in machine-to-machine communication means that clients of a RESTful API are not coupled to its URIs. This leads to decoupled clients and servers, free to evolve independently. This is one of the fundamental reasons people have for using REST, because SOAP-based solutions tend to be brittle due to tightly-coupled clients and servers.

Statelessness

Application state, such as the items in a user’s shopping cart, arguably should not be stored on the server in a RESTful application. This provides a foundation for fault tolerance and scalability because clients do not have to keep hitting the same machine that contains the state. Application state should therefore be kept on the client and sent to the server every time the server requires it. Going back to the shopping cart example, in a stateless REST API, the cart items could be stored in cookies and sent to the server with every request. Any problems a server may have, therefore, should not prevent other servers from stepping in to take over from it.

REST Fully Embraces HTTP

HTTP has a number of conventions that provide the basis for scalability, fault tolerance, and loose coupling. Because REST is based on these principles that make the web successful, it is essential that you at least have a basic understanding of HTTP’s features before you build RESTful applications.

Verbs

HTTP provides a uniform interface for interacting with resources. For example, to fetch a resource, you send a GET request to its URI. To delete the same resource, you send a DELETE request to the URI. You can use PUT requests to create a resource at the desired URI. For adding items to a collection, you can use the POST verb. Examples of how these common verbs can be used to interact with resources are shown in Table 13.1.

TABLE 13.1 Using HTTP Verbs to Create, Read, Update, and Delete Resources

URI Verb Action
/accounts/user123 GET Read/fetch the resource
/accounts/user123 DELETE Delete the resource
/accounts/user123 PUT Create the resource
/accounts/user123/addresses POST Add a new address to the collection

By having a single set of verbs that are applied uniformly across the entire Internet, it is easy to build generic API clients and infrastructure components, such as caches, that understand the web’s conventions. Think about it—every programming language has libraries for working with HTTP. This is the power of common conventions, and you can also harness them when integrating bounded contexts.
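
As a small, hedged illustration of this uniform interface (the URIs and payloads are hypothetical), the same generic HttpClient calls work against any resource that follows the conventions in Table 13.1:

using System.Net.Http;
using System.Threading.Tasks;

public class UniformInterfaceExample
{
    private static readonly HttpClient Client = new HttpClient();

    public static async Task Run()
    {
        // Hypothetical resource URI, matching Table 13.1
        var uri = "http://pppddddemo.com/accounts/user123";

        // Read the resource
        var getResponse = await Client.GetAsync(uri);

        // Create (or replace) the resource at a known URI
        var putResponse = await Client.PutAsync(uri, new StringContent("{ \"name\": \"User 123\" }"));

        // Add an item to a child collection
        var postResponse = await Client.PostAsync(uri + "/addresses", new StringContent("{ \"city\": \"London\" }"));

        // Delete the resource
        var deleteResponse = await Client.DeleteAsync(uri);

        // Each response carries a StatusCode that follows the conventions in Table 13.2
    }
}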

Status Codes

Complementary to HTTP’s verbs are its status codes. As with verbs, having a common set of status codes means any agent on the web understands the conventions. For example, whenever you make a request to a URI that doesn’t exist, you get an HTTP 404 status code back because it is a common standard that almost all systems adhere to.

HTTP status codes are grouped by their first digit, as shown in Table 13.2. Within each group are more specific status codes.

TABLE 13.2 HTTP Status Code Groups

Status Code Group Definition Example
1xx Informational This is rarely used.
2xx Success The resource you requested is returned.
3xx Redirection The resource you requested has been moved to another address.
4xx Client error You supplied an invalid parameter value.
5xx Server error There is a bug in the API code preventing the resource from being returned.

Wikipedia has an accessible introduction to HTTP status codes if you would like to learn more (http://en.wikipedia.org/wiki/List_of_HTTP_status_codes).

Headers

Aside from the URI and body of an HTTP request/response, you’re probably familiar with headers that provide extra information. RESTful systems frequently use HTTP’s caching headers that are covered later in this chapter.

Most RESTful applications require some level of security. Because a property of REST is statelessness, it’s often recommended that authentication and authorization details are communicated in headers using protocols such as OAuth.

For more information on headers that you can use in HTTP requests and responses, Wikipedia has an accessible, yet detailed entry (http://en.wikipedia.org/wiki/List_of_HTTP_header_fields).

What REST Is Not

REST can be a good choice for many projects and a suboptimal choice for others. Whatever choice you make, it’s important to name things accordingly for accurate communication. Unfortunately, REST is a much misused term. So before calling your API RESTful, you should check that as a bare minimum it centers on hypermedia and resources.

After numerous high-profile abuses of the term REST, in 2008 Roy Fielding was compelled to write a blog post demanding that people meet some basic requirements for their API to be called RESTful (http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypertext-driven). Another countermeasure to the abuse of the term REST is the Richardson Maturity Model (http://martinfowler.com/articles/richardsonMaturityModel.html). This is basically a barometer indicating how close your API is to being RESTful.

REST for Bounded Context Integration

The context for the remainder of this chapter is a redesign of the RPC examples earlier in the chapter. With some killer new features and viral marketing campaigns, user sign-ups for the fictitious social media start-up have again increased exponentially. The business wants to cash in on its success by adding premium accounts that are promoted to regular users. Premium accounts are a way for companies to gain followers on the social media website so they can profit from enhanced brand loyalty.

Unfortunately, as happens on many occasions, the RPC-based integration is not scaling well. It was the perfect choice initially for its time-to-market advantages but is now preventing features from being delivered because developers spend too long fire-fighting. So before you can add new features, you need to stabilize the system to support the rapid growth of the business.

This new version of the system is based on an event-driven architecture. Interestingly, though, instead of taking the common message bus approach (as per the previous chapter), this system uses REST and HTTP to preclude technology/vendor lock-in for the lifetime of the system.

Designing for REST

As suggested in the previous chapter, a small amount of design can create a shared vision and a deeper understanding of how the system being built addresses its functional and nonfunctional requirements. Some steps for designing a system that uses REST for integration will differ from those used to design a messaging system. Mostly the steps will be similar, though, including the first step: start with the domain.

DDD

Expressing business policies, using the UL, in a set of sketches is again a useful first step in designing a system. It nearly always makes sense to work out what problem you need to solve and what domain processes you need to model before you decide on a technical solution.

Figure 13.9 is a component diagram illustrating the new event-driven design of the Recommended Accounts use case. As with the messaging solution in the previous chapter, it focuses on domain commands and events. In fact, this design could be for a messaging system, because the diagram focuses on the flow of messages during the business use case and is independent of technology choices.


FIGURE 13.9 Component diagram for the Recommended Accounts use case.

In Figure 13.9, you can see two key domain events. First, there is Began Following. Throughout the company, every member of staff understands that a Began Following domain event occurs when one account starts following another. This is part of the UL and one of the core concepts of the business. The other domain event shown is Premium Recommendations Identified. This is also part of the UL, representing occurrences where the Discovery bounded context has identified a premium account that a regular account may like to follow. This is a crucial domain concept, because promoting premium accounts to regular users is central to the new business model.

SOA

Chapter 11 showed you that following the principles of SOA can lead to loosely coupled bounded contexts. In turn, this can provide the platform for high-performing teams. SOA’s principles are technology agnostic, so you can apply them when building systems with HTTP.

By isolating bounded contexts each owned by a single team, loose coupling is within reach. Each team is free to develop its features in line with its business priorities, free of cross-team distractions or dependencies. All that changes when using SOA with HTTP compared to messaging is that the contract between teams is no longer classes in code, but the format of HTTP requests and responses. Upcoming examples fully demonstrate this.

Event-Driven and Reactive

You saw in Chapters 11 and 12 that asynchronous messaging, based on reactive principles, was the recommendation for building fault-tolerant scalable systems. This is also the case when integrating with REST and HTTP. As you are probably aware, HTTP doesn’t inherently support publish/subscribe, so there’s no way to push out events to subscribers as they occur like you can with a message bus. Instead, with REST, clients can poll for changes. Polling generally has negative connotations in terms of scaling, but utilization of HTTP’s caching conventions can mitigate this problem.

Figure 13.10 is the containers diagram for the new Reactive, RESTful social media system that will be built in the remainder of this chapter. Note how there are some similarities to a messaging system: each component is small so that it can be scaled independently according to business needs; bounded contexts do not share dependencies such as databases. Additionally, thanks to HTTP, each team can use any technologies it prefers within its bounded contexts (which was more difficult to achieve in the previous chapter). These traits support scalability, fault tolerance, and development velocity.


FIGURE 13.10 Containers diagram of Discovery, Account Management, and Marketing bounded contexts.

An important design consideration for scalability is the granularity of projects. For your HTTP APIs, you could put all your endpoints in one project, but then you couldn’t easily deploy them independently based on their individual scalability needs. The loose recommendation in this chapter is to start with one project per resource. Examples of this in Figure 13.10 are the stand-alone projects for the entry point resource and the Accounts resource.

When you have nested resources, you may sometimes want to move them into their own projects, too. The main trade-off is the complexity of having extra projects versus the ability to scale APIs independently. You may even want to move individual request handlers into their own project if specific use cases have demanding scalability requirements.

HTTP & Hypermedia

Hypermedia is fundamental to REST because it is the factor that enables clients and servers to evolve independently of one another. So it can be useful to think a little about the hypermedia contracts up front before you start to build the system. (You can still iterate as you go along.) A good way to design REST workflows is to use sequence diagrams like the one shown in Figure 13.11 that demonstrates the event-driven Add Follower use case.


FIGURE 13.11 Flow of HTTP requests for the Add Follower use case.

As you can see, clients are only coupled to the entry point URI. From there, they make requests to other services by following hypermedia links provided in responses. For instance, clients wanting to initiate the Add Follower use case (akin to a domain command) on the Account Management bounded context are only coupled to the URI of the entry point resource—/accountmanagement. From then on, clients merely follow links in the hypermedia, returned as HTTP responses, until the Followers resource is reached.

In the containers diagram shown in Figure 13.10, there is an indication next to each HTTP endpoint of its content type. Most of them show application/hal+json. HAL stands for Hypertext Application Language; it is essentially just well-known existing content types—XML and JSON—with conventions for representing hypermedia links. You can learn about the HAL standard in depth on Mike Kelly’s blog (http://stateless.co/hal_specification.html). The examples in the remainder of this chapter use application/hal+json, so you will still learn the basics just by reading this chapter.

Another content type shown on the containers diagram is application/atom+xml, which denotes Atom. Atom is a common standard for publishing web feeds (similar to RSS) and thus is a great fit for representing lists of events. This is precisely how it is used by the Account Management bounded context—to represent the list of Began Following events that have occurred.

Using Atom as a feed of events is the main building block for building event-driven distributed systems with REST in this chapter. It doesn’t have to be Atom, but Atom’s popularity means you should definitely consider it.

An example of polling an Atom feed for domain events is shown in Figure 13.12. This diagram shows the flow of HTTP messages involved for polling the Began Following Atom feed that you will build later in this chapter.


FIGURE 13.12 Flow of HTTP requests for polling and consuming the Began Following event feed.
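
The following is a hedged sketch of a consumer implementing the polling flow in Figure 13.12; the feed URI, the polling interval, and the use of ETags for conditional GETs are illustrative assumptions rather than code from the chapter’s download.

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading;

// Illustrative polling consumer for the Began Following Atom feed
public class BeganFollowingFeedPoller
{
    private static readonly HttpClient Client = new HttpClient();
    private static EntityTagHeaderValue _lastETag;

    public static void Poll()
    {
        while (true)
        {
            // Hypothetical feed URI for this sketch
            var request = new HttpRequestMessage(HttpMethod.Get,
                "http://localhost:4102/accountmanagement/beganfollowingevents");

            // Conditional GET: only pull the feed if it has changed since last time
            if (_lastETag != null)
                request.Headers.IfNoneMatch.Add(_lastETag);

            var response = Client.SendAsync(request).Result;

            if (response.StatusCode == HttpStatusCode.NotModified)
            {
                // Nothing new; HTTP caching keeps the polling cheap
            }
            else if (response.IsSuccessStatusCode)
            {
                _lastETag = response.Headers.ETag;
                var atomXml = response.Content.ReadAsStringAsync().Result;
                // Parse the Atom entries and process any unseen Began Following events here
            }

            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }
}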

Building Event-Driven REST Systems with ASP.NET Web API

Now that you’ve heard all the theory, it’s time to start building the RESTful, event-driven version of the social media application. To keep the examples concise and focused on essential patterns, you will just be building the Account Management bounded context and some of the Discovery bounded context. From these examples, you will learn enough to begin using the concepts on your own projects.

An outside-in approach will be taken to build the RESTful social media system; you will start by adding the hypermedia entry point to the Account Management bounded context. You’ll then create the Accounts API. After that you’ll create the Atom feed that publishes Began Following events. Finally, you’ll create the consumer of the Began Following feed that resides in the Discovery bounded context.

During the upcoming examples, all code will live in the same Visual Studio solution for convenience. But when you’re building RESTful systems that integrate over HTTP, this is not necessary. In fact, you may be using different technologies where it is not even possible to share a solution. It’s up to you to decide what you think is best; having completely separate code repositories for each part of the system encourages looser coupling, but keeping code close together may help to share the bigger picture.

To begin building the new RESTful social media system, you can start by creating a new blank Visual Studio solution called PPPDDD.REST.SocialMedia.

Hypermedia-Driven APIs

Hypermedia is at the heart of REST. Accordingly, you will now see how to build hypermedia APIs in .NET with ASP.NET Web API. Although the implementation details will vary from framework to framework, the concepts are framework agnostic and will apply to any tools you may decide to build hypermedia APIs with.

As you’ve seen, the key benefit of hypermedia is that it decouples clients and servers, allowing independent evolution. But clients must know something up front about the API. This is the role of the entry point resource that clients should couple themselves to.

Entry Point Resource

When clients want to interact with a REST API, they start by requesting the entry point resource. From then on, they mostly just follow links in the hypermedia that is returned. Choosing where to locate your entry point(s) has a number of considerations that you should take into account. For instance, the design in Figure 13.10 chooses to have a single entry point per bounded context. But you could have a single entry point for the entire system or go more fine-grained and have an entry point per top-level resource. Ultimately, you have to decide on a per-project basis how much of the system you want to expose via entry point resources.

Designing an entry point involves identifying the initial resources and transitions that should be available to consumers of the API. The Account Management bounded context’s entry point will be the list of top-level resources. In this example, that is just the Accounts resource. From the Accounts resource, hypermedia will link to individual accounts, and from individual accounts to details of those accounts, exposed as child resources, such as their followers. This is just repeating what you saw in the sequence diagram earlier in the chapter.

As you’re starting to see, links are the building blocks of hypermedia. But for API clients to follow links, they need to be able to decide which link provides the transition they are looking for. This is the role of link relations, which indicate what the link represents (or, more specifically, its relationship to the current resource). For example, a resource that spans many pages would need a link with the relation Next, which clients can follow to reach the next page. You’ll see a number of links and relations in the upcoming examples.

To build the API that produces the Account Management entry point, you need to start by adding a new ASP.NET Web Application to the solution called AccountManagement.EntryPoint.Api. This follows the convention of naming API projects based on the format {bounded context}.{Resource}.Api. When adding the project to your solution, be sure to select the empty template, and be sure to check the Web API check box.

One decision still needs to be made before you can add the endpoint that produces the entry point resource. You need to choose a media type that supports hypermedia.

HAL

Traditionally, XHTML has been used as the hypermedia format for REST APIs. Unfortunately, it’s undesirable due to its verbosity (especially if you’re trying to escape from SOAP). Fortunately, though, a relatively new standard is available and gathering traction. This new standard is HAL, which was briefly introduced earlier in the chapter, and it comes in two main flavors: XML and JSON. Essentially, both of those well-known formats have been extended with specific conventions for representing links. This provides the hypermedia benefits of XHTML without the verbosity, as the following example demonstrates.


{
  "_links": {
    "self": {
      "href":"http://localhost:4100/accountmanagement"
    },
    "accounts": {
      "href":"http://localhost:4101/accounts"
    },
 }
}

The entry point resource shown in the preceding snippet demonstrates the conventions for representing links in HAL (JSON). All links must be defined within an element at the root of the resource called _links. Each link begins with its relation (self and accounts in the example). Each link also contains an href, which is the URI of the resource it points to. Only the self link, which points to the current resource, is mandatory; all other links are optional.

To build the entry point resource API, you first need to configure the project to start on port 4100. (You can set this in the project’s properties on the Web tab.) You then need to configure a URI for the entry point by adding a route inside Web API’s WebApiConfig, as shown in Listing 13-8. Your WebApiConfig file will be located in the App_Start folder that sits in the root of the project.
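
As a hedged sketch of the kind of route definition being described (the route name and defaults here are assumptions, not the precise contents of Listing 13-8):

using System.Web.Http;

// App_Start/WebApiConfig.cs in AccountManagement.EntryPoint.Api (illustrative sketch)
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Requests to /accountmanagement are routed to EntryPointController;
        // by convention a GET request invokes its Get() method.
        config.Routes.MapHttpRoute(
            name: "EntryPoint",
            routeTemplate: "accountmanagement",
            defaults: new { controller = "EntryPoint" }
        );
    }
}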

If you’re not familiar with Web API’s routing syntax, in Listing 13-8, the definition of the Entry Point route ensures that whenever a request comes in for the path /accountmanagement, the Get() method on a class called EntryPointController is invoked. Before you can implement that controller, you need to install a Nuget package that adds support for HAL in Web API. As you will see, this is an incredibly usable library that takes all the effort out of creating HAL APIs. To install WebApi.Hal into the AccountManagement.EntryPoint.Api project, you need to run the following command in the Nuget Package Manager Console:


Install-Package WebApi.Hal -Project AccountManagement.EntryPoint.Api

Once WebApi.Hal is installed, you need to tell Web API to make HAL (JSON) the default media type by updating your Global.asax.cs, as per Listing 13-9. This also sets HAL (XML) as the second preference. The two formatters, JsonHalMediaTypeFormatter and XmlHalMediaTypeFormatter, belong to the WebApi.Hal package you just installed.
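
A minimal sketch of that registration might look like the following; the exact wiring depends on the Web API version in use, so treat this as illustrative rather than the precise contents of Listing 13-9.

using System.Web;
using System.Web.Http;
using WebApi.Hal;

// Global.asax.cs (illustrative sketch): register the HAL formatters so that
// HAL-JSON is the first preference and HAL-XML the second.
public class WebApiApplication : HttpApplication
{
    protected void Application_Start()
    {
        var formatters = GlobalConfiguration.Configuration.Formatters;
        formatters.Insert(0, new JsonHalMediaTypeFormatter());
        formatters.Insert(1, new XmlHalMediaTypeFormatter());
    }
}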

All dependencies are installed and configuration is now complete, paving the way for you to complete the Entry Point API by implementing the EntryPointController. You can achieve this by first adding a class called EntryPointController to the Controllers folder that sits at the root of the project. Once you’ve added the class, you can then replace the contents of the file with the code in Listing 13-10.

The code in Listing 13-10, when executed, will return the entry point resource as HAL-JSON (by default). This is because EntryPointRepresentation inherits from the Representation base class—a class that the WebApi.Hal library provides. When a class inheriting from Representation is returned from a controller method, the JsonHalMediaTypeFormatter will convert it to HAL-JSON.

Inside Get(), the code declaratively maps onto the response format. Two links are generated in this code: one via the Href property (the self link) and one added to the Links collection (the accounts link). These are the two links that will appear in the response. You can see all this in action by testing out what you have so far.
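
The following is a rough sketch of such a controller; the WebApi.Hal API surface can vary slightly between versions, so treat it as illustrative rather than the precise contents of Listing 13-10.

using System.Web.Http;
using WebApi.Hal;

// Illustrative sketch of the entry point controller and its HAL representation
public class EntryPointController : ApiController
{
    public EntryPointRepresentation Get()
    {
        var representation = new EntryPointRepresentation();
        representation.Href = "/accountmanagement";                    // becomes the "self" link
        representation.Links.Add(new Link("accounts",
            "http://localhost:4101/accountmanagement/accounts"));      // link to the Accounts resource
        return representation;
    }
}

public class EntryPointRepresentation : Representation
{
    // No data fields: the entry point exists purely to expose links
}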

Testing HAL APIs with the HAL Browser

A huge benefit of building hypermedia APIs on top of common standards is that it is easy to create generic clients that can explore APIs built using those standards. One such tool for HAL is the HAL browser, a small web application that allows you to consume and interact with HAL APIs. You’ll now see how to use the HAL browser to test the Entry Point API you have just built.

To install the HAL browser, you have to download it (https://github.com/mikekelly/hal-browser/archive/master.zip) and then unzip it. Finally, you need to copy the contents of the unzipped folder into the root of your AccountManagement.EntryPoint.Api project’s folder in Windows Explorer. If you open Windows Explorer at the root of the AccountManagement.EntryPoint.Api project, there should be a file called browser.html. If you don’t see it, the file is in the wrong location.

If you are happy that the files are copied into the correct location, you can press F5 to run your project. In the browser that Visual Studio starts up for you, you can access the HAL browser by navigating to http://localhost:4100/browser.html. (This assumes that you configured this project to always use port 4100, as explained previously.) After navigating to browser.html, if everything is working correctly, you will see the HAL browser, as per Figure 13.13.


FIGURE 13.13 Accessing the HAL browser.

When you do access the HAL browser, you’ll notice it doesn’t show your entry point resource. This is because the HAL browser looks for API entry points at the default path (/). To remedy this, simply enter /accountmanagement into the HAL browser’s navigation bar (below the Explorer label) and click GO. You should then see the raw entry point resource (on the right side) and the interactive tools (on the left side), as shown in Figure 13.14.


FIGURE 13.14 Viewing the entry point resource in the HAL browser.

At the moment, the HAL browser provides little benefit because the links in the entry point resource point to nonexistent resources. So this is your next task: to implement the Accounts API. As the entry point resource in Figure 13.14 shows, the Accounts resource needs to be accessible at http://localhost:4101/accountmanagement/accounts.

URI Templates

In building the Accounts API, you will see how to handle a common concern people have for hypermedia: inefficient navigation. Consider an API that exposes lots of data, such as thousands or millions of accounts. Clients may have to navigate hundreds of links to find the resource they want by successively following links to the next page. Obviously, this can be massively inefficient for the client and the server—especially for public APIs with many concurrent clients. This problem is solved by using URI templates.

The following sample shows the Accounts resource, as HAL (JSON), that you are going to create an API for shortly. Look for the URI template; it’s the link whose templated attribute is set to true:


{
  "_links": {
    "self": {
      "href":"http://localhost:4101/accountmanagement/accounts"
    },
    "alternative": {
      "href":"http://localhost:4101/accountmanagement/accounts?page=1"
    },
    "account": [
      {
        "href":"http://localhost:4101/accountmanagement/accounts/{accountId}",
        "templated": true
      },
      {
        "href":"http://localhost:4101/accountmanagement/accounts/123"
      },
      ...
}

To represent URI templates, not only do you need to set the templated attribute to true, but you must also add placeholder sections in the URI. In the Accounts resource response just shown, you can see the placeholder is {accountId}. Clients of the API can then replace the placeholder with the ID of the account they are looking for. In doing so, you cut down hundreds of potential requests into just a single one. Creating URI templates with WebApi.Hal requires little effort, as you will now see while creating the Accounts API.

You can create the Accounts API by adding a new ASP.NET Web Application project called AccountManagement.Accounts.Api (again using the Empty template with the Web API check box selected). You need to set this as a start-up project and ensure that it runs on port 4101. Once it’s created, you can then add a reference to WebApi.Hal by running the following command in the Nuget Package Manager console:


Install-Package WebApi.Hal -Project AccountManagement.Accounts.Api

The final configuration step is to set HAL as the default content type, as shown previously in Listing 13-9.

Two URIs will be exposed by the Accounts API. Initially, clients will hit the Accounts resource at /accountmanagement/accounts, which represents the entire list of accounts. From there, clients will then navigate to individual accounts using the URI template contained within the Accounts resource—/accountmanagement/accounts/{accountId}. To declare these routes in your project, the WebApiConfig in your AccountManagement.Accounts.Api project should resemble Listing 13-11.
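If you don’t have the chapter download to hand, the route declarations might look something like the following sketch. This is only an approximation of Listing 13-11: the route names and defaults are assumptions, and the shipped listing is authoritative.


using System.Web.Http;

namespace AccountManagement.Accounts.Api
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // The full list of accounts
            config.Routes.MapHttpRoute(
                name: "Accounts",
                routeTemplate: "accountmanagement/accounts",
                defaults: new { controller = "Accounts", action = "Index" });

            // An individual account, matched via the {accountId} placeholder
            config.Routes.MapHttpRoute(
                name: "Account",
                routeTemplate: "accountmanagement/accounts/{accountId}",
                defaults: new { controller = "Accounts", action = "Account" });
        }
    }
}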

As the route declarations in Listing 13-11 show, a class called AccountsController needs to be added to the Controllers folder (still inside AccountManagement.Accounts.Api). It needs two methods—Index() and Accounts()—as dictated by the route definitions. Starting with just Index(), the AccountsController should initially contain the code shown in Listing 13-12.

Most of the code in Listing 13-12 will be familiar from Listing 13-10, but it is worthwhile noting the additional links. The alternative link relation represents links that have a different URI but point to the same resource (self). One benefit of this is that clients can cache more efficiently by treating both URIs as the same resource. Below the alternative link is the templated account link, which WebApi.Hal automatically marks as templated because the href contains a placeholder.
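To give a feel for how little effort WebApi.Hal asks of you, here is a hedged sketch of what the representation and controller behind Listing 13-12 could look like. The class names, the TotalAccounts property, and the hard-coded links are illustrative assumptions; only the link relations (self, alternative, and the templated account link) come from the text.


using System.Web.Http;
using WebApi.Hal;

public class AccountsRepresentation : Representation
{
    public int TotalAccounts { get; set; }

    protected override void CreateHypermedia()
    {
        Href = "/accountmanagement/accounts";  // the self link
        Links.Add(new Link("alternative", "/accountmanagement/accounts?page=1"));

        // Because this href contains a placeholder, WebApi.Hal marks the link as templated
        Links.Add(new Link("account", "/accountmanagement/accounts/{accountId}"));

        // A direct, nontemplated link to illustrate mixing both styles
        Links.Add(new Link("account", "/accountmanagement/accounts/123"));
    }
}

public class AccountsController : ApiController
{
    public AccountsRepresentation Index()
    {
        return new AccountsRepresentation { TotalAccounts = 2 };
    }
}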

Before you can test the new API in the HAL browser, you need to enable Cross-Origin Resource Sharing (CORS). This is because the Accounts resource is served from a different origin (port 4101 as opposed to 4100). The instructions for enabling CORS are detailed on the ASP.NET website (http://www.asp.net/web-api/overview/security/enabling-cross-origin-requests-in-web-api). Basically, you need to do the following:

  1. Add the CORS package to the project by running the following command inside the NuGet Package Manager Console:

    
    Install-Package Microsoft.AspNet.WebApi.Cors -Project AccountManagement.Accounts.Api
  2. Configure CORS in WebApiConfig, as per Listing 13-13 (a hedged sketch follows this list).
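As an illustration, a bare-bones version of that configuration might look like the sketch below. The allowed origin reflects the HAL browser running on port 4100; the rest is deliberately permissive because this is only for local development.


using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Allow the HAL browser (served from the entry point origin on port 4100)
        // to call this API on port 4101; any header, any HTTP method
        var cors = new EnableCorsAttribute("http://localhost:4100", "*", "*");
        config.EnableCors(cors);

        // ... route declarations as shown earlier ...
    }
}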

If you run the HAL browser as before, starting at the entry point and following the link to the Accounts resource, you will see the URI template link, as per Figure 13.15.

FIGURE 13.15 Following the Accounts link in the HAL browser.

URI templates are not all or nothing; you can combine them with normal links, even for the same resource. Figure 13.15 contains two nontemplated “account” links that illustrate this. They point directly to Account resources. For those links to work, though, you need to add an Account() method to the AccountsController. (This is determined by the route entry shown in Listing 13-11.)

An initial implementation of Account(), which returns just canned data, is shown in Listing 13-14. You can add this to your AccountsController below Index(). You also need to add the AccountRepresentation and Account classes from Listing 13-15. (You can put them all inside the AccountsController.cs file.)
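If you want a feel for the shape of that code before opening the download, the following sketch approximates Listings 13-14 and 13-15 using canned data. The property names AccountId and Name come from the text, as do the followers, following, and blurbs link relations; the constructor and the canned values are assumptions.


using System.Web.Http;
using WebApi.Hal;

public class Account
{
    public string AccountId { get; set; }
    public string Name { get; set; }
}

public class AccountRepresentation : Representation
{
    private readonly Account account;

    public AccountRepresentation(Account account)
    {
        this.account = account;
    }

    public string AccountId { get { return account.AccountId; } }
    public string Name { get { return account.Name; } }

    protected override void CreateHypermedia()
    {
        Href = "/accountmanagement/accounts/" + account.AccountId;  // the self link

        // Links to child resources of this account
        Links.Add(new Link("followers", Href + "/followers"));
        Links.Add(new Link("following", Href + "/following"));
        Links.Add(new Link("blurbs", Href + "/blurbs"));
    }
}

public class AccountsController : ApiController
{
    // ... Index() as before ...

    public AccountRepresentation Account(string accountId)
    {
        // Canned data only; a real implementation would load the account
        return new AccountRepresentation(
            new Account { AccountId = accountId, Name = "Canned account " + accountId });
    }
}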


A resource’s fields (as opposed to its links) are represented as standard JSON in an HTTP response. In Listing 13-15, you can see that an AccountRepresentation has the properties AccountId and Name. Therefore, if you navigate to an Account resource using the HAL browser, you will see these properties represented as plain JSON. Figure 13.16 also shows this.

FIGURE 13.16 Resource’s data fields are represented as plain JSON.

Figure 13.16 also shows some noteworthy links. In particular, the three links that point to child resources of this account are followers, following, and blurbs. You’re going to build the followers endpoint in the next section, but the other two links are just to give you an idea of how a resource with many child resources could look. Building the followers endpoint is where you will start to build the event-driven parts of the system using the Event Store.

Persisting Events with the Event Store

Most of the infrastructure is in place for you to learn about building event-driven systems with REST. The first part of your learning involves storing events. There are many ways you can achieve this, such as writing events to a text file log or using a table in a SQL database. But in this example, you will see a purpose-built tool—the Event Store—that is the work of popular DDD practitioner Greg Young and his team (www.geteventstore.com).

To learn about storing events, you need to build a new endpoint in your Accounts API that returns the followers of an account. The endpoint also supports adding new followers to the collection by POSTing to it. This flow was previously illustrated in Figure 13.11. As usual, the first step to creating a new endpoint for exposing resources is to start with the route definition. Listing 13-16 shows how your WebApiConfig should be updated to add the route definition for the Followers endpoint.

Listing 13-16 shows that the Followers resource will be served up by a method called Index() on a controller called FollowersController. You can create that controller by adding a class called FollowersController, in your project’s Controllers folder, that resembles Listing 13-17.

The implementation of Index() in Listing 13-17 merely returns a canned response, parameterized with the passed-in account ID. That is not the important part of this example—the important and exciting part is persisting an event in response to data being posted to the endpoint. You can see that process being triggered in Listing 13-18, which shows the code that needs to be added to your FollowersController directly below Index(). You also need to add the BeganFollowing class that is shown in Listing 13-19.




IndexPOST(), shown in Listing 13-18, responds to POST requests for /accountmanagement/accounts/{accountId}/followers. You can see this because the method is decorated with the HttpPost attribute. Because another method in the same file is already called Index(), the ActionName attribute indicates that IndexPOST() should still respond to requests for the Account Followers route even though it has a different name.
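Pulling Listings 13-17 through 13-19 together, a hedged approximation of the controller might look like the following. The FollowersRepresentation type and the fields on BeganFollowing are assumptions; only the method names, the attributes, and the call to EventPersister.PersistEvent() are taken from the text.


using System;
using System.Web.Http;
using WebApi.Hal;

public class BeganFollowing
{
    public string AccountId { get; set; }    // the account being followed (assumed field)
    public string FollowerId { get; set; }   // the new follower (assumed field)
    public DateTime TimeStamp { get; set; }
}

public class FollowersRepresentation : Representation
{
    public string AccountId { get; set; }

    protected override void CreateHypermedia()
    {
        Href = "/accountmanagement/accounts/" + AccountId + "/followers";  // the self link
    }
}

public class FollowersController : ApiController
{
    // GET /accountmanagement/accounts/{accountId}/followers
    public FollowersRepresentation Index(string accountId)
    {
        // Canned response, parameterized with the passed-in account ID
        return new FollowersRepresentation { AccountId = accountId };
    }

    // POST /accountmanagement/accounts/{accountId}/followers
    [HttpPost]
    [ActionName("Index")]
    public IHttpActionResult IndexPOST(string accountId, BeganFollowing beganFollowing)
    {
        beganFollowing.AccountId = accountId;
        beganFollowing.TimeStamp = DateTime.UtcNow;

        EventPersister.PersistEvent(beganFollowing);  // persistence is covered next

        return Ok();
    }
}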

The crucial event-persistence mechanics are not included in Listing 13-18. You can, though, see a call to EventPersister.PersistEvent(). This is the class that handles event persistence. Its contents are shown in Listing 13-20, which contains the bare-minimum functionality for persisting events to the Event Store. You need to add this class to your project. For convenience, you can put it at the bottom of the AccountsController.cs file (but outside of the AccountsController class). Before adding the EventPersister, though, you need to install the Event Store C# client with the following command:


Install-Package EventStore.Client -Project AccountManagement.Accounts.Api

For the EventPersister to compile, you need to include the following using statements:


using EventStore.ClientAPI;
using Newtonsoft.Json;
using System.Net;
using System.Text;

Of the code shown in Listing 13-20, there are two key details to focus on: the event is converted to JSON (and then binary), and it is appended to a stream—the BeganFollowing stream in this case. You will get a better understanding of how all of this works once the Event Store is up and running.
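For readers without the download, a minimal EventPersister along the lines of Listing 13-20 might resemble the sketch below. It assumes the 2.x Event Store C# client API (which offers synchronous Connect and AppendToStream calls); the connection handling and complete lack of error handling are deliberate simplifications.


using System;
using System.Net;
using System.Text;
using EventStore.ClientAPI;
using Newtonsoft.Json;

public static class EventPersister
{
    public static void PersistEvent(object @event)
    {
        // The Event Store's TCP interface listens on port 1113 by default
        var connection = EventStoreConnection.Create(new IPEndPoint(IPAddress.Loopback, 1113));
        connection.Connect();

        // Serialize the domain event to JSON, and then to bytes
        var json = JsonConvert.SerializeObject(@event);
        var data = Encoding.UTF8.GetBytes(json);

        var eventData = new EventData(
            Guid.NewGuid(),    // unique event ID
            "BeganFollowing",  // event type
            true,              // the payload is JSON
            data,
            null);             // no metadata

        // Append the event to the BeganFollowing stream
        connection.AppendToStream("BeganFollowing", ExpectedVersion.Any, eventData);
        connection.Close();
    }
}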

Installing and Starting the Event Store

This example uses version 2.0.1 of the Event Store (http://download.geteventstore.com/binaries/EventStore-OSS-Win-v2.0.1.zip). Once you’ve downloaded it, you just need to extract the archive into a directory and then run the following PowerShell command from that directory (as Administrator):


./EventStore.SingleNode.exe --db .\ESData

To confirm that the Event Store has started up successfully, you should be able to access the management application by going to http://localhost:2113. As confirmation of success, you are presented with the welcome page shown in Figure 13.17. If you don’t see the welcome screen, double-check that you ran PowerShell as Administrator. Also, look to see if there were any errors printed on the PowerShell console. If you do see the welcome page, that means the Event Store is running and is now patiently waiting to persist all your events.

FIGURE 13.17 The Event Store’s admin UI.


Viewing Persisted Events with the Event Store Admin UI

You just saw a glimpse of the Event Store admin UI, and now you will explore some of its key features as you view events that are being created via the Accounts API. To get events into the system, you need to POST details of new followers. For demonstration purposes, you can do all this through the HAL browser by going to the entry point (/accountmanagement in the HAL browser) after you have started the system. You then need to follow the link to the Accounts resource. From the Accounts resource, you need to follow the link to one of the dummy accounts, and from there follow the link to its followers resource.

If you want to post to the followers resource, you need to click the orange button in the NON-GET column for the row that represents the self link. This button is shown in Figure 13.18. Clicking this button opens a dialog enabling you to construct a JSON payload that will be posted to the endpoint. Figure 13.19 shows correctly formatted JSON being entered into this dialog containing the details of a new follower. Once you’ve entered some JSON, click the OK button, and your JSON is posted.

FIGURE 13.18 The NON-GET button in the HAL browser.

FIGURE 13.19 Constructing JSON on the NON-GET dialog in the HAL browser.

Provided you got a 200 response back from posting the JSON, you can now view the event using the Event Store’s admin UI. After navigating to http://localhost:2113/ and choosing the Streams menu item, you should see a stream called BeganFollowing that was created when you posted from the HAL browser. You can click its name to view the events in that stream. From there, you can inspect individual events, as shown in Figure 13.20.

FIGURE 13.20 Viewing an event in the Event Store.

Publishing Events to an Atom Feed

Atom is a discerning choice for exposing events in many RESTful systems because it is an extremely common format, as discussed earlier in the chapter. In this example, you will see how to use the tools baked into the .NET framework for creating and publishing an Atom feed.

Applications that publish events as an Atom feed are akin to message-publishing components in a messaging system. Accordingly, for an event-driven REST system, you can use a similar naming convention that communicates domain concepts, such as {BoundedContext}.{BusinessComponent}.{Component}.

To create the component that publishes the Began Following domain event, you can start by adding a new ASP.NET Web Application project to the solution, called AccountManagement.RegularAccounts.BeganFollowing. You need to configure this application to run on port 4102 and make it a start-up project.

Creating a Basic Atom Feed in .NET

To create an Atom feed using official libraries that are part of the .NET Framework, you first need to add a reference to System.ServiceModel in the new AccountManagement.RegularAccounts.BeganFollowing project. After setting up a route definition, shown in Listing 13-21, you can then use classes from System.ServiceModel to create an Atom feed using events retrieved from the Event Store, as shown in Listing 13-22. The code in Listing 13-22 needs to be added as a new controller in the Controllers folder.


As you can see in Listing 13-22, an Atom feed is created using the SyndicationFeed class. The created feed is then set as the response of the HTTP request via an XmlWriter. On the response object, application/atom+xml is set as the content type. This will be passed directly as the value of the HTTP Content-Type response header. You can also see that individual events, retrieved from the Event Store (EventRetriever.RecentEvents()), are converted into feed items. But what you can’t see in Listing 13-22 is how to retrieve the events from the Event Store. That is shown next.
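A hedged sketch of such a controller is shown below. It follows the shape described for Listing 13-22 (SyndicationFeed, an XmlWriter, and the application/atom+xml content type), but the feed title, the URIs, and the fields read from each stored event (EventId, EventNumber, JsonData, Created) are assumptions; the StoredEvent DTO they belong to is sketched alongside the EventRetriever in the next section.


using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.ServiceModel.Syndication;
using System.Text;
using System.Web.Http;
using System.Xml;

public class BeganFollowingController : ApiController
{
    public HttpResponseMessage Index()
    {
        var feed = new SyndicationFeed(
            "Began Following",                // feed title
            "Began Following domain events",  // feed description
            new Uri("http://localhost:4102/accountmanagement/beganfollowing"));

        // Convert each stored event into an Atom feed item
        feed.Items = EventRetriever.RecentEvents()
            .Select(e => new SyndicationItem(
                "BeganFollowing",             // item title
                e.JsonData,                   // event payload as the item content
                new Uri("http://localhost:4102/accountmanagement/beganfollowing/" + e.EventNumber),
                e.EventId.ToString(),         // unique item ID
                e.Created))
            .ToList();

        // Write the feed as Atom XML
        var output = new StringBuilder();
        var settings = new XmlWriterSettings { OmitXmlDeclaration = true };
        using (var writer = XmlWriter.Create(output, settings))
        {
            new Atom10FeedFormatter(feed).WriteTo(writer);
        }

        // Return it with the Atom content type
        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(output.ToString(), Encoding.UTF8, "application/atom+xml")
        };
    }
}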

Retrieving Events from the Event Store

In Listing 13-22, individual feed items are generated by retrieving events from the Event Store using a custom utility class: EventRetriever. The contents of EventRetriever are shown in Listing 13-23 and need to be added to your project. To make life easy, you can pop it in the bottom of the file containing the BeganFollowingController if you don’t want to create another file.

EventRetriever is a utility class that wraps the Event Store C# client. It is hard-coded to retrieve the past 20 events, starting from the most recent. This is enabled by using ReadStreamEventsBackward, which starts with the most recent events and works backward. There’s a lot of functionality provided by the Event Store C# client that isn’t covered in this book, so if you’re thinking about using the Event Store and the C# client, the Event Store website contains lots of useful information.
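A hedged sketch of what EventRetriever might contain is shown below. It again assumes the 2.x Event Store C# client and introduces a simple StoredEvent DTO (the same one the Atom controller sketch above reads from) to carry the fields the feed needs.


using System;
using System.Collections.Generic;
using System.Net;
using System.Text;
using EventStore.ClientAPI;

public class StoredEvent
{
    public Guid EventId { get; set; }
    public int EventNumber { get; set; }
    public string JsonData { get; set; }
    public DateTime Created { get; set; }
}

public static class EventRetriever
{
    public static IEnumerable<StoredEvent> RecentEvents()
    {
        var connection = EventStoreConnection.Create(new IPEndPoint(IPAddress.Loopback, 1113));
        connection.Connect();

        // Read the 20 most recent events from the BeganFollowing stream,
        // starting at the end of the stream and working backward
        var slice = connection.ReadStreamEventsBackward("BeganFollowing", StreamPosition.End, 20, false);

        var events = new List<StoredEvent>();
        foreach (var resolvedEvent in slice.Events)
        {
            events.Add(new StoredEvent
            {
                EventId = resolvedEvent.Event.EventId,
                EventNumber = resolvedEvent.Event.EventNumber,
                JsonData = Encoding.UTF8.GetString(resolvedEvent.Event.Data),
                Created = DateTime.UtcNow  // simplification; real code would use the event's stored timestamp
            });
        }

        connection.Close();
        return events;
    }
}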

For the EventRetriever to compile, you need to add a reference to the Event Store C# client in this project as well. The following command takes care of installing it for you:


Install-Package EventStore.Client -Project AccountManagement.RegularAccounts.BeganFollowing

To test that your Atom feed is working as expected, you first need to update the entry point resource (in the AccountManagement.EntryPoint.Api project) to provide a link to the Atom feed (remember, clients should not be coupled to resources, only the entry point). Listing 13-24 shows the updated entry point resource containing the required link. Once the resource is added to your project, you can test the feed by viewing it directly in a browser (accessing a resource directly is okay if you’re just testing it): http://localhost:4102/accountmanagement/beganfollowing.

Archiving Feeds

In a highly scalable system with potentially millions of users, there may be hundreds or thousands of events every second. Having a single Atom feed for all these events would quickly become unusable. This could result in a massive waste of network bandwidth, as well as other inefficiency-related issues. A common solution is to display a fixed number of events per feed and, once that capacity is reached, to archive the feed. Importantly, each feed contains hypermedia links to the previous and next archives (if they exist). For more information, the Internet Engineering Task Force (IETF) has a request for comments (RFC) titled “Feed Paging and Archiving” (https://tools.ietf.org/html/rfc5005).

Creating an Event Subscriber/Atom Feed Consumer

Consuming an Atom feed that exposes domain events is akin to subscribing to messages in a messaging system. However, consuming an Atom feed inverts the process of receiving pushed messages by polling and pulling them instead. It’s a little more work up-front for developers, but it definitely has compelling advantages.

When creating a consumer of an Atom feed, you can again take advantage of Atom’s popularity by using official libraries in the .NET framework. You’ll see this shortly as you build the first part of the Discovery bounded context that polls the Began Following Atom feed. The project you create for this does not need to be a web project. Instead, you can create a new C# Class Library called Discovery.Recommendations.Followers (as per the containers diagram in Figure 13.10). As before, to take advantage of .NET’s built-in syndication libraries, you need to add a reference to System.ServiceModel. You also need to configure this project as a start-up project.

Subscribing to Events by Polling

Inside the new polling component, the logic you are about to add consists of a few generic steps. These steps are likely to be similar in any Atom-feed polling application you build.

  1. Fetch a batch of events starting from the last event ID that was processed (or the first item if no event has been processed yet).
  2. Process each item in the batch according to domain policies.
  3. Store the ID of the last event processed.

The first part of implementing the polling consumer for the Discovery bounded context is shown in Listing 13-25. This contains only the high-level logic. You can add all this code to your project inside a single class called BeganFollowingPollingFeedConsumer in the root of the project.

Listing 13-25 shows the first part of the feed consumer. It illustrates how a batch of events will be retrieved from the feed and processed. You can see polling is set to a maximum of once per second with the call to Thread.Sleep().
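For orientation, the high-level loop might resemble the sketch below. The class is declared partial purely so the fetching logic in the next sketch can slot in alongside it, and the in-memory set of processed IDs is a simplification of step 3 (real code would persist the ID of the last processed event).


using System;
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel.Syndication;
using System.Threading;

public partial class BeganFollowingPollingFeedConsumer
{
    // Simplification: the text stores only the ID of the last processed event
    private static readonly HashSet<string> processedIds = new HashSet<string>();

    public static void Start()
    {
        while (true)
        {
            try
            {
                // 1. Fetch a batch of events from the Atom feed
                var feedItems = FetchFeed();

                // 2. Process each event that hasn't been seen yet
                foreach (var item in feedItems.Where(i => !processedIds.Contains(i.Id)))
                {
                    ProcessEvent(item);

                    // 3. Record that this event has been processed
                    processedIds.Add(item.Id);
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine("Polling failed: {0}", ex.Message);
            }

            // Poll at most once per second
            Thread.Sleep(TimeSpan.FromSeconds(1));
        }
    }

    private static void ProcessEvent(SyndicationItem item)
    {
        // Placeholder for the domain-specific processing described in the text,
        // e.g. deserializing the BeganFollowing payload and updating recommendations
        Console.WriteLine("Processing event {0}", item.Id);
    }
}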

Focus now shifts to the lower-level details of actually fetching the feed. This is shown in Listing 13-26; it’s an example of a REST API client following links in hypermedia from an entry point to a target resource. You need to add this code directly below the code you added from Listing 13-25. You also need to add ServiceStack.Text to the project by running the following command:


Install-Package ServiceStack.Text -Project Discovery.Recommendations.Followers

The code in Listing 13-26 also depends on the classes in Listing 13-27 and the following using statements, which need to be added in the same file:


using ServiceStack.Text;
using System.Xml.Linq;
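The fetching logic itself could then resemble the following sketch, which completes the partial class started above. The beganFollowing link relation name is an assumption (the real name is whatever Listing 13-24 uses), and this sketch leans on SyndicationFeed.Load rather than hand-parsing the XML, so it folds the parsing work of Listings 13-29 and 13-30 into a single step. The important part is that the consumer starts at the entry point and follows hypermedia to the feed.


using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.ServiceModel.Syndication;
using System.Xml;
using ServiceStack.Text;

public partial class BeganFollowingPollingFeedConsumer
{
    private const string EntryPointUrl = "http://localhost:4100/accountmanagement";

    private static IEnumerable<SyndicationItem> FetchFeed()
    {
        using (var client = new WebClient())
        {
            // 1. Start at the entry point resource and parse the HAL JSON
            client.Headers[HttpRequestHeader.Accept] = "application/hal+json";
            var entryPointJson = client.DownloadString(EntryPointUrl);
            var links = JsonObject.Parse(entryPointJson).Object("_links");

            // 2. Follow the link to the Began Following Atom feed
            //    ("beganFollowing" is an assumed link relation name)
            var feedUrl = links.Object("beganFollowing").Get("href");

            // 3. Load and parse the Atom feed
            using (var reader = XmlReader.Create(feedUrl))
            {
                return SyndicationFeed.Load(reader).Items.ToList();
            }
        }
    }
}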

After fetching the feed, individual events can then be processed. A demonstrative implementation of this for the BeganFollowingPollingFeedConsumer is shown in Listing 13-28.

Listing 13-28 demonstrates the generic high-level logic you would likely see in a feed consumer. First you fetch a batch of events from the feed, then you select the ones that have not yet been processed. In some cases where feeds are paged or archived, you may need to make additional requests, again using hypermedia, to locate the last event processed. After locating the events that are unprocessed, you then process each one according to your domain rules and update the ID of the last processed event.

You may be wondering how errors are handled during the processing of events. With REST-based integration, there is no out-of-the-box support for poison messages or automatic retries. This is covered in a touch more detail toward the end of the chapter.

To complete the example, you need to implement the remaining piece of lower-level logic, which parses events from the feed. This is shown in Listings 13-29 and 13-30. You also require a final pair of using statements:


using System.IO;
using System.Xml;

That wraps up the example for this chapter. Hopefully you’ve understood enough theory and seen enough examples to feel confident about considering REST as an option for bounded context integration on your projects.

All that remains for this example is to test that everything works. You can do that by POSTing new followers, as shown previously. Keep an eye on the console window that automatically pops up. You should see output similar to Figure 13.21. You can also access the Atom feed directly in the browser again to check the new events that appear on it.

FIGURE 13.21 Feed consumer processing events

Maintaining REST Applications

As with a messaging or any other system, you have to support the application after it has been initially deployed. This may involve versioning APIs as they evolve, monitoring how the system is performing, or capturing metrics that are used to inform business decisions.

Versioning

Small improvements to APIs can easily be achieved without breaking any existing clients. The key is to make sure changes are backward compatible. If you had an application that produced the Shipping Status resource:


{
    "totalLegs": 5,
    "legsCompleted": 3,
    "currentVesselId":"sst399",
    "nextVesselId":"u223a"
}

and wanted to add a new piece of information to it, you need only add the extra piece of information at the bottom like this:


{
    "totalLegs": 5,
    "legsCompleted": 3,
    "currentVesselId":"sst399",
    "nextVesselId":"u223a",
    "eta":"2014-09-01"
}

This is a backward-compatible change and is desirable because clients coupled to the old format do not break.

API overhauls are a more contentious topic. These occur when you want to make big or breaking changes to an API. You may want to remove resources, move information between resources, or completely change formats. The two most common versioning options are to include the version in the URI or in an HTTP header. Versioning a URI usually involves a prefix such as /v2/accountmanagement/. Alternatively, versioning with a header may involve a custom Version header like this: Version: 2.
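As a quick illustration of the URI option, the sketch below exposes a reworked resource under a /v2/ prefix while leaving the original route untouched; the route and controller names are assumptions.


using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Existing clients keep using the original route
        config.Routes.MapHttpRoute(
            name: "AccountsV1",
            routeTemplate: "accountmanagement/accounts",
            defaults: new { controller = "Accounts", action = "Index" });

        // New clients opt in to the breaking changes via the /v2/ prefix
        config.Routes.MapHttpRoute(
            name: "AccountsV2",
            routeTemplate: "v2/accountmanagement/accounts",
            defaults: new { controller = "AccountsV2", action = "Index" });
    }
}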

Monitoring and Metrics

A big benefit of using HTTP is that there are a lot of off-the-shelf monitoring tools you can simply plug in to your APIs to immediately get a whole host of metrics. New Relic (http://newrelic.com/) is a popular choice, but it is not free. Instead, or in combination, you may want to capture custom metrics. In such cases, tools like StatsD (https://github.com/etsy/statsd/) and the C# StatsD client (https://github.com/goncalopereira/statsd-csharp-client) are popular options.

Drawbacks with REST for Bounded Context Integration

You’ve probably started to form your own opinions by now, but REST is definitely an option for teams that want to build scalable, fault-tolerant systems without being coupled to messaging frameworks. Before you decide that REST is the right choice, though, it’s important to consider a few of its drawbacks.

A number of REST’s drawbacks when compared to messaging systems involve more development work up front. It can be a bigger initial effort to build scalable, fault-tolerant systems with REST. Most of the additional development work is to compensate for features that come out of the box with messaging solutions. But as you read through the list of drawbacks, keep in mind that over the lifetime of the project, the drawbacks may turn into advantages. You’ll have fewer frameworks to manage, and you will be closer to the metal when it comes to understanding how your distributed system actually communicates.

Less Fault Tolerance Out of the Box

Event-driven REST improves fault tolerance compared to RPC but lacks a little compared to messaging solutions. In the previous chapter, the intent to place an order was captured and immediately stored. Any failures delivering the PlaceOrder command would simply result in the message being retried. That’s not true for the REST system you built in this chapter. If the Event Store were unavailable during an attempt to store a Began Following event, there would be no automatic recovery when the Event Store came back online.

One option to improve fault tolerance is to add store-and-forward mechanisms yourself. This could involve adding queues in locations where fault tolerance is important to the business. Alternatively, you could try a high-availability approach by adding more instances of an application behind a load balancer or to a cluster. The Event Store supports clustering, so that’s definitely a viable option for the example in this chapter.

To summarize, you have to work a bit harder to gain some of the fault tolerance benefits that messaging frameworks provide by default.

Eventual Consistency

Shared-nothing, loosely coupled systems that communicate asynchronously are always going to have a high susceptibility to eventual consistency. Event-driven REST as recommended in this chapter definitely falls into that category. For example, when the Account Management bounded context exposes Began Following events, it has already stored them locally. But consumers don’t see those events until they have polled the feed and processed them. So, depending on which API a client hits, it may or may not see information based on recent events.

Dealing with eventual consistency when integrating with REST relies on the same fundamental concepts as in a messaging system. You need to forgo big transactions in favor of smaller ones. Also, you need to roll forward into new states. Finally, consider retrying messages a number of times in the hope that eventually they will succeed.

The Salient Points

  • HTTP’s popularity makes it a serious candidate for integrating bounded contexts.
  • Among its many benefits, HTTP leaves you completely free of any technology couplings, allowing you to mix and match technologies as you prefer.
  • Using HTTP means you may be able to use the same set of APIs internally and externally.
  • You can use HTTP in a number of ways; you can use it for RPC or event-driven REST.
  • RPC can be a good choice for simple solutions, whereas event-driven REST can lead to better fault tolerance and scalability.
  • With RPC over HTTP, you can use feature-rich but verbose SOAP or lightweight XML or JSON.
  • REST is fundamentally about resources, hypermedia, and statelessness based on how the web works.
  • REST takes full advantage of HTTP’s conventions and capabilities.
  • Domain events can be used as the messages in event-driven REST systems.
  • Asynchronous polling of Atom feeds that contain lists of events provides the basis for event-driven REST applications.
  • You can still use SOA’s principles with REST to build loosely coupled systems and loosely coupled teams.
  • HTTP requests and responses are the contract between bounded contexts. Try to avoid breaking changes, and aim for backward compatibility to avoid disrupting other teams.
  • Whichever form of HTTP you use, take every sensible opportunity to make domain concepts explicit.