Chapter 7. Routing Considerations

This chapter looks at Cloud Foundry’s routing mechanisms in more detail. User-facing apps need to be accessed by a URL, often referred to in Cloud Foundry parlance as a route. End users target the URL for the app that they want to access. The app then hopefully returns the correct response. However, there is often a lot more going on behind that simple request–response behavior.

Operators can use routing mechanisms to provide additional security, to ease deployment across a microservices architecture, and to avoid downtime during upgrades through well-established techniques such as canary releases and blue/green deployments. For these reasons, an understanding of Cloud Foundry's routing mechanisms, along with an appreciation of its routing capabilities, is an important operational concern. Additionally, understanding how the different Cloud Foundry components dynamically handle routing is important for debugging platform- or app-routing issues.

Routing Primitives

The Cloud Foundry operator deals with the following:

  • Routes

  • Hostnames

  • Domains

  • Context paths

  • Ports

The Cloud Foundry documentation explores these concepts at length. This chapter explores the key considerations for establishing routing best practices. We begin with a brief introduction to the terms and then move on to the routing mechanisms and capabilities.

Routes

To enable traffic from external clients, apps require a specific URL, known as a route. For example, developers can create a route by mapping the route myapp.shared-cf-domain.com to the app myapp.

You can construct routes via a combination of the following:

  • Domain

  • Host

  • Port

  • Context path

Route construction is explained further in a later section.

Each Cloud Foundry instance can have a single default domain and additional domains that can be shared across organizations (Orgs) in that instance. Routes are then based on those domains. Routes belong to a Space, and only apps in the same Space as a route can be mapped to it. A developer in one Space cannot create or use a route that already exists in another Space. For this reason, many developers include app-name-${random-word} in their route to ensure that their app route is unique during the dev/test phase.

One app, one route, multiple app instances

You can map an individual app to either a single route or, if desired, multiple routes. Because apps can have multiple app instances (ActualLRPs), all accessed by the single route, each route has an associated array of host:port entries stored in a routing table on the GoRouter. Figure 7-1 shows that the host is the Diego Cell machine running the LRP in a container and the port corresponds to a dedicated host port for that container. The router regularly recomputes new routing tables based on the Cell IP addresses and the host-side port numbers for the containers, as illustrated in Figure 7-1. In a cloud-based distributed environment, both desired and actual state can rapidly change; thus, it is important to dynamically update routes both periodically and immediately in response to state changes.

Figure 7-1. Cell-to-container port mapping
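
As an illustration, the following commands are a minimal sketch (using the hypothetical app myapp and the shared domain shared-cf-domain.com) of scaling an app to three instances behind a single route; the GoRouter then load-balances requests for that route across the three resulting host:port entries:

$ cf map-route myapp shared-cf-domain.com --hostname myapp   # map myapp.shared-cf-domain.com to myapp
$ cf scale myapp -i 3                                        # run three ActualLRPs behind that single route
$ cf app myapp                                               # confirm the instance count and mapped routes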

One app, multiple routes

You can also map an individual app to multiple routes, granting multiple URLs access to that app. This capability is illustrated in Figure 7-2.

Figure 7-2. One app mapped to two different routes

Mapping more than one route to an app can be a valuable feature for establishing techniques such as blue/green deployments. You can read more about blue/green deployments in the Cloud Foundry Documentation.
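
As a minimal blue/green sketch, assuming two deployed versions named myapp-blue and myapp-green and the shared domain shared-cf-domain.com, the production route is cut over by remapping it:

$ cf push myapp-green --random-route                                # deploy the new version under a temporary route
$ cf map-route myapp-green shared-cf-domain.com --hostname myapp    # green now also serves the live URL
$ cf unmap-route myapp-blue shared-cf-domain.com --hostname myapp   # retire blue from the live URL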

Several apps, one route

In addition to being able to map all identical app instances to a single route, as depicted in Figure 7-3, you can also map independent apps to a single route. This results in the GoRouter load-balancing requests for the route across all instances of all mapped apps, as demonstrated in Figure 7-3. This feature is also important for enabling the use of blue/green and canary deployment strategies. It is also used when dealing with different apps that must work collectively with a single entry point; for example, microservices architecture (discussed shortly).

Figure 7-3. Routing mechanism allowing for several apps mapped to the same route
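
As a sketch, assuming a running app myapp and a new candidate named myapp-canary, mapping both to the same route causes the GoRouter to round-robin requests across all instances of both apps:

$ cf map-route myapp-canary shared-cf-domain.com --hostname myapp     # the canary now shares the live route
$ cf scale myapp-canary -i 1                                          # keep the canary's share of traffic small
$ cf unmap-route myapp-canary shared-cf-domain.com --hostname myapp   # withdraw the canary if problems appear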

Hostnames

Cloud Foundry provides the option of creating a route with a hostname. A hostname is a name that can be explicitly used to specify an app, as shown in the following code:

$ cf create-route my-space shared-domain.com --hostname myapp

This creates the unique route myapp.shared-domain.com, composed of the hostname prepended to the shared domain.

At this stage, all we have done is reserve the route so that it cannot be used in another Space. The app is only routable via this route after the route has been mapped to it, as in the following:

$ cf map-route myapp shared-domain.com --hostname myapp

Note that although this route is created for the Space my-space, the Space is not featured in the route name.

Routes created for shared domains must always use a hostname. Alternatively, you can create a route without a hostname. This approach creates a route for the domain itself and is permitted for private domains only. You can create such a route as follows:

$ cf create-route my-space private-domain.com

This example creates a route in the Space my-space from the domain private-domain.com. After you configure your DNS, Cloud Foundry will route requests for http(s)://private-domain.com, or any context path under that URL (e.g., private-domain.com/app1), to apps that are mapped to that route or context path. Requests for any subdomain (e.g., foo.private-domain.com) will fail unless additional routes are created for that subdomain and then mapped to an app.

You can use wildcard routes here as a catch-all (e.g., *.private-domain.com); for example, to serve a custom 404 page or a specific homepage.
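
Here is a sketch of creating and mapping such a wildcard route on a private domain (hypothetical names; the quoted * prevents shell globbing):

$ cf create-route my-space private-domain.com --hostname '*'      # catch-all route for *.private-domain.com
$ cf map-route custom-404-app private-domain.com --hostname '*'   # serve the catch-all from a dedicated app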

Domains

Cloud Foundry’s use of the terms domain, shared domain, and private domain differs from their common use:

  • Domains provide a namespace from which to create routes.

  • Shared domains are available to users in all Orgs, and every Cloud Foundry instance requires a single default shared domain.

  • Private domains allow users to create routes for privately registered domain names.

As discussed in “Hostnames”, by default an app named my-app is assigned a route composed of the hostname my-app and the app domain apps.cf-domain.com, resulting in the route my-app.apps.cf-domain.com.
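
For reference, here is a minimal sketch of listing and creating domains (the domain names are hypothetical, and creating shared domains requires administrator privileges):

$ cf domains                                   # list shared and private domains visible to the Org
$ cf create-shared-domain apps2.cf-domain.com  # admin: add another shared app domain
$ cf create-domain my-org private-domain.com   # create a private domain owned by my-org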

The presence of a domain in Cloud Foundry indicates that requests for any route created from that domain will be routed to a specific Cloud Foundry instance. This provision requires DNS to be configured to resolve the domain name to the IP address of a load balancer that fronts traffic entering Cloud Foundry.1

The recommended practice is to have a wildcard canonical name (CNAME) that you can use as a base domain for other subdomains. An example of a wildcard CNAME is *.cf-domain.com.

To use a subdomain of your registered domain name with apps on Cloud Foundry, configure the subdomain as a CNAME record with your DNS provider, pointing at any shared domain offered in Cloud Foundry.

When installing Cloud Foundry, it is good practice to have a separate system domain and one or more app domains; for example, system.cf-domain.com for your system domain, and apps.cf-domain.com for (one of) your app domain(s). Multiple app domains can be advantageous and are discussed further later.

The system domain allows Cloud Foundry to receive requests for, and send communication between, its internal components (such as the UAA and Cloud Controller). Cloud Foundry itself can run some of its components as apps; for example, a service broker can deploy an app. The app domain guarantees that requests for routes based on that domain will go to a specific Cloud Foundry instance.

If we had only one combined system and app domain, there would be no separation of concerns. A developer could register an app route that implies it is a system component, causing confusion for the Cloud Foundry operator as to which apps are system apps and which are developer-deployed apps. Moreover, mapping an arbitrary app to a system component can cause fundamental system failures, as we will explore in “Scenario Five: Route Collision”. For these reasons, it is recommended that you always have at least one default system domain and one default app domain per environment.

All system components should register routes that are extensions of the system domain; for example, login.system.cf-domain.com, uaa.system.cf-domain.com, doppler.system.cf-domain.com, and api.system.cf-domain.com.

Multiple app domains

There are some advantages to using multiple app domains. For example, an operator might want to establish a dedicated app domain with a dedicated cert and VIP. If you issue a certificate for a critical app on a dedicated app domain, and for some reason that certificate becomes compromised, you have the flexibility of revoking just that certificate without affecting all of your other apps that are on a different app domain.

Context Path Routing

Context path routing allows for routing to be based not only on the route domain name (essentially the host header), but also the path specified in the route’s URL. The GoRouter inspects the URL for additional context paths and, upon discovery, can then route requests to different apps based on that path. Here are a couple of examples:

  • myapp.mycf-domain.com/foo can be mapped to the foo app.

  • myapp.mycf-domain.com/bar can be mapped to the bar app.

This is important when dealing with a microservices architecture. With microservices, a single “big-A” app can be composed of a suite of smaller microservice apps, as shown in Figure 7-4. The smaller apps often require the same single top-level route myapp.mycf-domain.com to offer a single entry point for the user. Context path routing allows different microservice apps (e.g., foo and bar), all served by the same parent route, to support different paths in the URL, based on their unique context paths.

With context path-based routing, you can also independently scale up or down those portions of your big-A app that are being heavily utilized.

Figure 7-4. Routing using a single route and context paths to target a specific app
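
Here is a sketch of context path routing with the hypothetical microservices foo and bar sharing the parent route myapp.mycf-domain.com:

$ cf map-route foo mycf-domain.com --hostname myapp --path foo   # myapp.mycf-domain.com/foo -> foo app
$ cf map-route bar mycf-domain.com --hostname myapp --path bar   # myapp.mycf-domain.com/bar -> bar app
$ cf scale bar -i 4                                              # scale just the heavily used microservice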

Routing Components Overview

There are several components involved in the flow of ingress Cloud Foundry traffic. We can broadly group these as follows:

  • Routing tier (load balancer, GoRouter, TCPRouter)

  • The control plane and user management (Cloud Controller and UAA)

  • The app components (Cells and the SSH proxy)

Figure 7-5 provides a high-level view of the components.

Figure 7-5. Routing components and communication flow

Let’s take a closer look at each of these components:

Load balancer

All HTTP-based traffic first enters Cloud Foundry from an external load balancer fronting Cloud Foundry. The load balancer is primarily used for traffic routing to Cloud Foundry routers.

GoRouter

The GoRouter receives all incoming HTTP(S) traffic from the load balancer. The GoRouter also receives WebSocket requests and performs the HTTP-to-WebSocket upgrade to establish a consistent TCP connection to the backend.

TCPRouter

The TCPRouter receives all incoming (non-HTTP) TCP traffic from the load balancer.

Cloud Controller and the UAA

Operators address the Cloud Controller through Cloud Foundry’s API. As part of this flow, identity management is provided by the UAA.

Cells and the SSH proxy

App users target their desired apps via a dedicated hostname and/or domain combination. The GoRouter routes app traffic to the appropriate app instance (ActualLRP) running on a Diego Cell. If multiple app instances are running, the GoRouter round-robins traffic across the app instances to distribute the workload. Developers can SSH into an app's container running on a host Cell via the SSH proxy service.
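
For example, a developer can open an SSH session to a specific app instance through the SSH proxy (a sketch; the -i flag selects the instance index and -c runs a single command):

$ cf ssh myapp -i 1              # SSH into instance 1 of myapp via the SSH proxy
$ cf ssh myapp -i 1 -c "ps aux"  # or run a single command in that container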

Routing Flow

All traffic enters Cloud Foundry from an external load balancer. The load balancer routes the traffic as follows:

  • HTTP/HTTPS and WebSocket traffic to the GoRouter

  • (Non-HTTP) TCP traffic to the TCPRouter

App traffic is routed from the routers to the required app. If you’re running multiple app instances (ActualLRPs), the routers will load-balance the traffic across the running ActualLRPs. If an app requires the user to authenticate, you can redirect requests to the UAA’s login server. Upon authentication, the user is passed back to the original app. The Cloud Controller provides an example of this behavior.

Platform users target the Cloud Controller. Requests come in from the load balancer through the GoRouter and hit the CAPI. If the user has yet to log in, requests are redirected to the UAA for authentication. Upon authentication, the user is redirected back to the Cloud Controller.
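
Concretely, this is the flow behind targeting and logging in to the platform (a sketch assuming the hypothetical system domain system.cf-domain.com):

$ cf api https://api.system.cf-domain.com      # requests travel load balancer -> GoRouter -> Cloud Controller (CAPI)
$ cf login -u developer -o my-org -s my-space  # unauthenticated requests are redirected to the UAA, then back to CAPI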

Route-Mapping Flow

When you create a route (either directly via the Cloud Foundry map-route command or indirectly via cf push), the Route-Emitter, which listens to events in Diego's BBS, notices the newly created route. It takes the route-mapping information (the Cell host:port) and dynamically updates the routing table. Any additional change, for example a deleted or moved app, also results in the emitter updating the routing table. We discuss route mapping and the Route-Emitter further in “Routing Table”.

Load Balancer Considerations

Although the choice of load balancer is yours to make, there are some specific considerations required:

  • Setting the correct request header fields.

  • Determining where to terminate SSL.

  • Configuring the load balancer to pass HTTP upgrade requests for WebSockets through to the routing tier. Ideally, WebSocket upgrades should be handled by the GoRouter rather than being passed on to the TCPRouter.

Setting Request Header Fields

When a client connects to a web server through an HTTP proxy or load balancer, it is possible to identify the originating IP address and protocol of the client request by setting the X-Forwarded-For and X-Forwarded-Proto request header fields, respectively. These headers must be set on the load balancer that fronts the traffic coming into Cloud Foundry. HTTP traffic passed from the GoRouter to an app includes these headers. If an app needs to behave differently based on the protocol used, it can inspect the headers to determine whether traffic was received over HTTP or HTTPS.

X-Forwarded-For

X-Forwarded-For (XFF) provides the IP address of the originating client request. For example, an XFF request header for a client with an IP address of 203.0.56.67 would be as follows:

X-Forwarded-For: 203.0.56.67

Without XFF, connections through the router would reveal only the IP address of the router itself, effectively turning the router into an anonymizing service and making the detection and prevention of abusive access significantly more difficult. The usefulness of XFF depends on each proxy in the chain truthfully reporting the original client IP address. If your load balancer terminates TLS upstream of the GoRouter, it must append these headers to the requests it forwards on to the GoRouter.

X-Forwarded-Proto

X-Forwarded-Proto (XFP) identifies the client protocol (HTTP or HTTPS) used from the client to connect to the load balancer. The scheme is HTTP if the client made an insecure request, or HTTPS if the client made a secure request. For example, an XFP for a request that originated from the client as an HTTPS request would be as follows:

X-Forwarded-Proto: https

As with most client-server architectures, the GoRouter access logs contain only the protocol used between the GoRouter and the load balancer; they do not contain the protocol information used between the client and the load balancer. XFP allows the router to determine this information. The load balancer stores the protocol used between the client and the load balancer in the XFP request header and passes the header along to the router.

XFP is important because you can configure apps to reject insecure requests by inspecting this header for the HTTP scheme. The header is at least as important for system components as it is for apps; the UAA, for example, will reject all login attempts if this header is not set.

WebSocket Upgrades

WebSockets is a protocol providing bidirectional communication over a single, long-lived TCP connection. It is commonly implemented by web clients and servers. WebSockets are initiated via HTTP as an upgrade request. The GoRouter supports WebSocket upgrades, holding the TCP connection open with the selected app instance.

Supporting WebSockets is important because the Firehose (the endpoint of all aggregated and streamed app logs) is a WebSockets endpoint that streams all event data originating from a Cloud Foundry deployment. To support WebSockets, operators must configure their load balancer to pass WebSockets requests through as opaque TCP connections. WebSockets are also vital for app log streaming, allowing developers to view their app logs.

Some load balancers are unable to listen for both HTTP and TCP traffic on the same port. Take, for example, the ELB offered by AWS. An ELB can listen on a port in either HTTP(S) mode or TCP mode. To pass through a WebSocket request, the ELB must be in TCP mode. However, if the ELB is terminating TLS requests on 443 and appending the XFF and XFP headers, it must be in HTTP mode. Therefore, the ELB cannot handle WebSockets on the same port. In this scenario, you can do one of the following:

  • Configure your load balancer to listen for WebSocket requests on a nonstandard port (e.g., 8443) and forward them in TCP mode to the GoRouter on port 80 or 443. App clients must make their WebSocket upgrade requests to this port (see the sketch following this list).

  • Add a second load balancer listening in TCP mode on standard port 80. Configure DNS with a dedicated hostname for use with WebSockets that resolves to the new load balancer serving port 80 in TCP mode.
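
The first option can be exercised from a client as a plain HTTP upgrade handshake against the nonstandard port; the following curl sketch is illustrative only (the route is hypothetical and the Sec-WebSocket-Key value is an arbitrary example):

$ curl -i --http1.1 \
    -H "Connection: Upgrade" \
    -H "Upgrade: websocket" \
    -H "Sec-WebSocket-Version: 13" \
    -H "Sec-WebSocket-Key: ZXhhbXBsZS1rZXktMTIzNA==" \
    "http://myapp.apps.cf-domain.com:8443/"
# a successful upgrade returns HTTP/1.1 101 Switching Protocols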

The PROXY Protocol

As just described, WebSockets require a TCP connection; however, when using TCP mode, load balancers will not add the XFF HTTP header, so you cannot identify your clients. Another solution for client identification is to use the PROXY protocol. This protocol allows your load balancer to add a PROXY protocol header so that your apps can still identify your clients even when you use TCP mode at the load balancer.

Another scenario is terminating TLS with a component that does not support HTTP and operates only in TCP mode; the decrypted HTTP connection is then passed on to the GoRouter.

A point to note is that some load balancers in TCP mode will not give you HTTP multiplexing and pipelining. This could cause a performance problem unless you have a content delivery network (CDN) in front.

TLS Termination and IPSec

Although Cloud Foundry is a distributed system, conceptually we can consider it as a software appliance. It is designed to sit in a dedicated network with defined egress and ingress firewall rules. For this reason, if the load balancer sits within or on the edge of the private network, it can handle TLS decryption and then route traffic unencrypted to the GoRouter.

However, if the load balancer is not dedicated to Cloud Foundry and it is located on a general-purpose corporate network, it is possible to pass the TLS connection on to the GoRouter for decryption. To implement this, you must configure your load balancer to re-sign the request between the load balancer and the GoRouter using your wildcard certificate. You will also need to configure the GoRouter with your Cloud Foundry certificates.

There might be situations in which you require encryption directly back to the app and data layer. For these scenarios, you can use the additional IPSec BOSH add-on that provides encrypted traffic between every component machine.

GoRouter Considerations

The GoRouter serves HTTP(S) traffic only. HTTP(S) connections to apps from the outside world are accepted only on ports 80 or 443. (Protocol upgrade requests for WebSockets are also acceptable.)

All router logic is contained in a single process. This approach removes unnecessary latency introduced through interprocess communication. Additionally, with full control over every client connection, the router can more easily allow for connection upgrades to WebSockets and other types of traffic (e.g., HTTP tunneling and proxying via HTTP CONNECT).

Routing Table

The router uses a routing table to keep track of available apps. This table contains an up-to-date list of all the routes to the Cells and containers that are currently running ActualLRPs. As described earlier, you can map multiple routes to an app and map multiple apps to a route. The routing table keeps track of this mapping. It provides the source of truth for all routing, dynamically checking for and pruning dead routes to avoid 404 errors.

Diego uses its Route-Emitter component to consume event streams from the Diego Database—the BBS—and then pushes the route updates to the router. Additionally, the Route-Emitter performs a bulk lookup operation against its database every 20 seconds to fetch all the desired and actual routes.

The GoRouter then recomputes a new routing table based on the IP addresses of each Cell machine and the host-side port numbers for the Cell’s containers. This ensures that the routing table information is up to date in the event that an app fails.

Router and Route High Availability

GoRouters should be clustered both for resiliency and for handling a large number of concurrent client connections.

When GoRouters come online, they send router.start messages informing Route-Emitters that they are running. Route-Emitters monitor desired and actual LRP events in the BBS to establish and map routes to app instances. They compute the routing table and send this table to the GoRouter via NATS at regular intervals.

This ensures that new GoRouters update their routing table and synchronize with existing GoRouters. Routes will be pruned from the routing table if an app connection goes stale. To maintain an active route, the route must be updated by default at least every two minutes.

An important implementation consideration for the GoRouter is that because it uses NATS, it must be brought online after the NATS component in order to function properly.

Router Instrumentation and Logging

Like the other Cloud Foundry components, the GoRouter provides logging through its Metron agent. In addition, a /routes endpoint returns the entire routing table as JSON. Because of the nature of the data present in /routes, the endpoint requires HTTP basic authentication credentials and is served on port 8080. These credentials can be obtained from the deployment manifest under the router job:

    status:
      password: some_password
      port: 8080
      user: some_user

The credentials can also be obtained from the GoRouter VM at /var/vcap/jobs/gorouter/config/gorouter.yml.

Each route contains an associated array of host:port entries, which is useful for debugging:

$ curl -vvv "http://some_user:some_password@10.0.32.15:8080/routes"
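
The output is a JSON document keyed by route, with each route listing its backend host:port entries; a trimmed, purely illustrative example might look like the following:

    {
      "api.system.cf-domain.com": ["10.0.16.23:8181"],
      "myapp.apps.cf-domain.com": ["10.0.32.20:60012", "10.0.48.15:61003"]
    }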

In addition to the routing table endpoint, the GoRouter offers a healthcheck endpoint on /health:

$ curl -v "http://10.0.32.15:8080/health"

This is particularly useful when performing healthchecks from a load balancer. This endpoint does not require credentials and should be accessed at port 8080. Because load balancers typically round-robin the GoRouters, by regularly checking the GoRouter health, they can avoid sending traffic to GoRouters that are temporarily not responding.

You can configure the GoRouter logging levels in the Cloud Foundry deployment manifest. The meanings of the router’s log levels are as follows:

fatal

An error has occurred that renders the router inoperable; for example, the router cannot bind to its TCP port, or a Cloud Foundry component has published invalid data to the GoRouter.

warn

An unexpected state has occurred. For example, the GoRouter tried to publish data that could not be encoded as JSON.

info, debug

An expected event has occurred; for example, a new Cloud Foundry component was registered with the GoRouter, or the GoRouter has begun to prune routes for stale containers.

Sticky Sessions

For compatible apps, the GoRouter supports sticky sessions (aka session affinity) for incoming HTTP requests.

When multiple app instances are running, sticky sessions will cause requests from a particular client to always reach the same app instance. This makes it possible for apps to store session data specific to a user session. Generally, this approach is not good practice; however, for some select pieces of data such as discrete and lightweight user information, it can be a pragmatic approach.


A single app can have several instances running concurrently. Functional use of the local filesystem is limited to local caching because filesystems provided to apps are ephemeral unless you use a filesystem service. By default, changes to the filesystem are not preserved between app restarts, nor are they synchronized or shared between multiple app instances.

This means Cloud Foundry does not natively maintain or replicate HTTP session data across app instances, and all cached session data will be discarded if the app instance hosting the sticky session is terminated. If you require session data to be saved, it must be offloaded to a backing service that offers data persistence.

To support sticky sessions, apps must return a JSESSIONID cookie in their responses.

If an app returns a JSESSIONID cookie to a client request, the GoRouter appends an additional VCAP_ID cookie to the response, which contains a unique identifier for the app instance. On subsequent client requests, the client provides both the JSESSIONID and VCAP_ID cookies, allowing the GoRouter to forward client requests back to the same app instance.

If the app instance identified by the VCAP_ID is no longer available, the GoRouter attempts to route the request to a different instance of the app. If the GoRouter finds a healthy instance of the app, it initiates a new sticky session.
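
You can observe this behavior with curl by capturing and replaying the cookies (a sketch against a hypothetical app endpoint that sets JSESSIONID):

$ curl -c cookies.txt https://myapp.apps.cf-domain.com/login    # response sets JSESSIONID plus the GoRouter's instance cookie
$ curl -b cookies.txt https://myapp.apps.cf-domain.com/profile  # replaying both cookies pins the request to the same instance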

The TCPRouter

Support for non-HTTP workloads on Cloud Foundry is provided by the TCPRouter. The TCPRouter allows operators to offer TCP routes to app developers based on reservable ports.

When pushing an app mapped to a TCP route:

$ cf push myapp -d tcp.mycf-domain.com --random-route

the response from the Cloud Controller includes a port associated with the TCP route. Client requests to these ports will be routed to apps running on Cloud Foundry through a layer 4 protocol-agnostic routing tier.

Both HTTP and TCP routes will be directed to the same app port, identified by environment variable $PORT.

The developer experience for TCP routing is similar to other routing-related workflows. For example, the developer begins by discovering a domain that supports TCP routing through cf domains. The output of cf domains shows whether a specific domain has been enabled for TCP routing; the operator enables this by setting up DNS for that domain to point to the load balancer, which in turn points to the TCPRouters.

The discovered domain gives a developer an indication that requests for routes created from that domain will be forwarded to apps mapped to that route. It also provides a namespace allowing operators to control access for one domain or another.

After you choose your domain, you can then create a route from that domain via the usual create-route and map-route commands. However, the cf push experience is streamlined because the appropriate route will be configured for the app simply by selecting a TCP domain. For example, to create a TCP route for the app myapp using the domain tcp-example-domain.com, you can run the following:

$ cf push myapp -d tcp-example-domain.com --random-route

TCP routes are different from HTTP routes because they do not use hostnames; instead, routing decisions are based on a port. For each TCP route, we reserve a port on the TCPRouter. This requires clients of apps that receive TCP app traffic to support these arbitrary ports.
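
Here is a sketch of explicitly creating and mapping a TCP route (the domain and port are hypothetical; --random-port lets Cloud Foundry pick an available port from the reserved range):

$ cf create-route my-space tcp.mycf-domain.com --random-port   # reserves, for example, port 60017
$ cf map-route myapp tcp.mycf-domain.com --port 60017          # route tcp.mycf-domain.com:60017 -> myapp
$ nc tcp.mycf-domain.com 60017                                 # clients must address the reserved port directly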

TCP Routing Management Plane

The TCP routing management plane (see Figure 7-5) has similar functionality to the HTTP routing management plane. There is a Route-Emitter listening to events in Diego’s BBS. For example, whenever a new app is created or an app is moved or scaled, the BBS is updated.

The emitter detects these events, constructs the routing table, and then sends this table on to the routing API. The routing API effectively replaces the need for NATS; it maintains the routing table and makes the configuration available across a tier of TCPRouter instances. Therefore, TCPRouters receive their configuration from the routing API and not via NATS. Both the TCPRouter and the Route-Emitter receive their configuration through periodic bulk fetches and real-time server-sent events.

TCP routing introduces some complexity through additional NAT involving different ports at different tiers, as illustrated in Figure 7-6. There is a route port to which clients send requests; this port is reserved on the TCPRouter when you create a TCP route. Behind the scenes, the TCPRouter translates between that route port and the app instances. Containers expose app ports (which default to 8080). These ports are not directly accessible via the TCPRouter, because containers run on a Cell that provides an additional NAT for the container. Therefore, the ports made known to the TCPRouter are the Cell ports (the backend ports).

Figure 7-6. TCPRouter port mappings from the load balancer through to the app

TCPRouter Configuration Steps

Here are the deployment steps required for configuring TCP routing (a sketch of the corresponding CLI commands follows the list):

  1. Choose a domain name from which developers will create their TCP routes.

  2. Configure DNS so that the domain name resolves to the load balancer.

  3. Choose how many TCP routes the platform should support based on the reserved ports on the TCPRouter.

  4. Configure the load balancer to listen on the port range2 and then forward requests for that port range and domain to the TCPRouters.

  5. Configure the TCPRouter group within Cloud Foundry with the same port range.

  6. Create a domain associated with that TCPRouter group.

  7. Configure a quota to entitle Orgs to create TCP routes.
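
A sketch of the CLI side of these steps follows (the router group and quota names are hypothetical; the port range itself is configured on the load balancer and in the routing deployment manifest):

$ cf router-groups                                                          # list available router groups (e.g., the TCP router group)
$ cf create-shared-domain tcp.mycf-domain.com --router-group default-tcp   # step 6: create a domain for TCP routes
$ cf update-quota default --reserved-route-ports 50                        # step 7: allow Orgs under this quota to reserve TCP route ports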

Route Services

Apps often have additional requirements over and above traditional middleware components such as databases, caches, and message buses. These additional requirements include capabilities such as authentication, special firewalling, or rate limiting. Traditionally, the burden has fallen on the developer and app operator to build these additional (nonbusiness) capabilities into the app, or to use and configure some other external capability directly, such as an edge-caching appliance.

The route services capability makes it possible for developers to select a specific route service from the marketplace (in a similar fashion to middleware services) and insert that service into the app request path. Route services offer a new point of integration and a new class of service.

As seen in Figure 7-7, route services give you the ability to dynamically insert a component (in the case of Figure 7-7, Apigee) into the network path as traffic flows to apps. Traditionally, developers had to file a ticket to get a new load balancer configuration for additional firewall settings, or IT had to manually insert additional network components for things like rate limiting. Route services now offer these additional capabilities dynamically via integration with the router.

Figure 7-7. The Route Service showing the path of application user traffic to an app, accessing an API Proxy Service (in this case provided by Apigee)

Unlike middleware marketplace services, which are bound to an app, route services are bound to a route for a specific app. New requests to that app can then be modified by the route service. Just like the middleware services in the marketplace, the route service need not be Cloud Foundry- or BOSH-deployed and managed. For example, a route service could be any of the following:

  • An app running on Cloud Foundry

  • A BOSH-, Puppet-, or Chef-deployed component

  • Some other external enterprise service provided by a third party such as Apigee

Route Service Workflow

All requests arrive (1) via the external load balancer, which passes traffic on to the GoRouter (2). The GoRouter checks for a bound route service for a route, and if no service exists, it will simply pass traffic on to the appropriate LRP (3). If a route service does exist for the route, the router will then pass that traffic on to the service that is bound to the route.

Before passing the request on to the service, the router generates an encrypted, short-lived message that includes both the requested route and the route service GUID. The router then appends this message to the request header and forwards the request to the bound route service. After the route service has done its work (e.g., header modification or rate limiting), the service can do one of two things:

  • Respond directly to the request (e.g., serve an access-denied message if acting as an app firewall)

  • Pass the traffic to the app

The route service passes traffic to the app (4) by resolving via DNS back to the load balancer (5), then to the router (6), and then to the app. The response traffic then follows that same flow backward to return a response to the client (7/8/9/10/11/12). This allows the service to do further modification on the returned response body if required. Figure 7-8 provides an overview of the architecture.

Figure 7-8. Route service workflow showing the redirection of app traffic to the route service before being directed back to the app via the load balancer (note that the return flow retraces the same path in reverse)
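
From the developer's perspective, inserting a route service into an app's request path involves registering the service and binding it to the app's route. The following sketch uses a hypothetical user-provided route service (marketplace route services are bound to routes in the same way):

$ cf create-user-provided-service rate-limiter -r https://rate-limiter.apps.cf-domain.com   # register the route service URL
$ cf bind-route-service apps.cf-domain.com rate-limiter --hostname myapp                    # requests for myapp now flow via the route service
$ cf unbind-route-service apps.cf-domain.com rate-limiter --hostname myapp                  # remove it from the request path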

Route Service Use Cases

You can consider any use cases that can be on the request path as eligible for a route service. Here are some examples:

  • Gateway use cases such as rate limiting, metering, and caching

  • Security use cases such as authentication, authorization, auditing, fraud detection, and network sniffing

  • Analytics use cases such as monetization, chargeback, and utilization

  • Mobile backend as a service (MBaaS) such as push notifications and data services

In line with the rest of Cloud Foundry, the key goal of routing services is to increase app velocity. Without the developer self-service that route services provide, most organizations are left with the pain of ticketing systems and extra configuration to achieve these use-case capabilities.

Summary

The core premise of Cloud Foundry is to allow apps to be deployed with velocity and operated with ease. The routing abstractions and mechanisms within Cloud Foundry have been designed and implemented to support that premise:

  • Routing is an integral part of deploying and operating apps.

  • Cloud Foundry provides a rich set of abstractions and mechanisms for supporting fast deployment, rolling upgrades, and other complicated routing requirements.

  • Establishing the most appropriate routing architecture is essential for app security, resiliency, and updatability.

  • With the introduction of the TCPRouter and additional route services, the platform can take on more diverse workloads with broader, more granular routing requirements.

1 The enterprise load balancer you use is your choice. Cloud Foundry is not opinionated about the load-balancing strategy that fronts it.

2 Make sure your port range accounts for sufficient capacity because every TCP connection will require a dedicated port from your reserved port range.
