Scaling Effects

In biology, the square-cube law explains why we’ll never see elephant-sized spiders. The bug’s weight scales with volume, so it goes as O(n^3). The strength of the leg scales with the area of the cross section, so it goes as O(n^2). If you make the critter ten times as large, that makes the strength-to-weight ratio one-tenth of the small version, and the legs just can’t hold it up.

We run into such scaling effects all the time. Anytime you have a “many-to-one” or “many-to-few” relationship, you can be hit by scaling effects when one side increases. For instance, a database server that holds up just fine when ten machines call it might crash miserably when you add the next fifty machines.

In the development environment, every application runs on one machine. In QA, pretty much every application looks like one or two machines. When you get to production, though, some applications are really, really small, and some are medium, large, or humongous. Because the development and test environments rarely replicate production sizing, it can be hard to see where scaling effects will bite you.

Point-to-Point Communications

One of the worst places that scaling effects will bite you is with point-to-point communication. Point-to-point communication between machines probably works just fine when only one or two instances are communicating, as in the following figure.

[Figure: point-to-point communication in the development and test environments (images/stability_antipatterns/dev_test_env_p2p.png)]

With point-to-point connections, each instance has to talk directly to every other instance, as shown in the next figure.

[Figure: point-to-point communication in the production environment (images/stability_antipatterns/prod_env_p2p.png)]

The total number of connections goes up as the square of the number of instances. Scale that up to a hundred instances, and the O(n^2) scaling becomes quite painful. This is a multiplier effect driven by the number of application instances. Depending on the eventual size of your system, O(n^2) scaling might be fine. Either way, you should know about this effect before your system hits production.
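
To put numbers on that claim: with n instances, each pair of instances needs its own connection, so the total is

  n * (n - 1) / 2

Two instances need just one connection, ten need 45, and a hundred need 4,950, with every instance maintaining 99 connections of its own.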

Be sure to distinguish point-to-point communication inside a service from point-to-point communication between services. The usual pattern between services is fan-in from my farm of machines to a load balancer in front of your machines, which is a different case. Here we’re not talking about having every service call every other service.

Unfortunately, unless you are Microsoft or Google, it is unlikely you can build a test farm the same size as your production environment. This type of defect cannot be tested out; it must be designed out.

This is one of those times when there is no “best” choice, just a good choice for a particular set of circumstances. If the application will only ever have two servers, then point-to-point communication is perfectly fine. (As long as the communication is written so it won’t block when the other server dies!) As the number of servers grows, a different communication strategy is needed. Depending on your infrastructure, you can replace point-to-point communication with the following:

  • UDP broadcasts
  • TCP or UDP multicast
  • Publish/subscribe messaging
  • Message queues

Broadcasts do the job but aren’t bandwidth-efficient. They also cause some additional load on servers that aren’t interested in the messages, since each server’s NIC gets the broadcast and must notify the TCP/IP stack. Multicasts are more efficient, since they permit only the interested servers to receive the message. Publish/subscribe messaging is better still, since a server can pick up a message even if it wasn’t listening at the precise moment the message was sent. Of course, publish/subscribe messaging often brings in some serious infrastructure cost. This is a great time to apply the XP principle that says, “Do the simplest thing that will work.”
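
To make one of these concrete, here’s a minimal sketch of the multicast option in Java. The group address 230.0.0.1 and port 4446 are placeholders, and a real system would add a message format, error handling, and reconnection logic of its own.

  import java.net.DatagramPacket;
  import java.net.InetAddress;
  import java.net.MulticastSocket;

  // Sketch: every instance joins the same multicast group, so a sender
  // transmits one datagram to the group instead of holding a connection
  // to each peer.
  public class GroupListener {
      public static void main(String[] args) throws Exception {
          InetAddress group = InetAddress.getByName("230.0.0.1"); // placeholder group address
          try (MulticastSocket socket = new MulticastSocket(4446)) { // placeholder port
              socket.joinGroup(group);
              byte[] buffer = new byte[1024];
              while (true) {
                  DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                  socket.receive(packet); // blocks until the next group message arrives
                  String message = new String(packet.getData(), 0, packet.getLength());
                  System.out.println("received: " + message);
              }
          }
      }
  }

The important property is that sending a status update costs one datagram, no matter how many instances are listening.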

Shared Resources

Another scaling effect that can jeopardize stability is the “shared resource” effect. Commonly seen in the guise of a service-oriented architecture or “common services” project, the shared resource is some facility that all members of a horizontally scalable layer need to use. With some application servers, the shared resource will be a cluster manager or a lock manager. When the shared resource gets overloaded, it’ll become a bottleneck limiting capacity. The following figure should give you an idea of how the callers can put a hurting on the shared resource.

[Figure: many-to-one communication with a shared resource (images/stability_antipatterns/many_to_one_comm.png)]

When the shared resource is redundant and nonexclusive—meaning it can service several of its consumers at once—then there’s no problem. If it saturates, you can add more, thus scaling the bottleneck.

The most scalable architecture is the shared-nothing architecture. Each server operates independently, without need for coordination or calls to any centralized services. In a shared-nothing architecture, capacity scales more or less linearly with the number of servers.

The trouble with a shared-nothing architecture is that it might scale better at the cost of failover. For example, consider session failover. A user’s session resides in memory on an application server. When that server goes down, the next request from the user will be directed to another server. Obviously, we’d like that transition to be invisible to the user, so the user’s session should be loaded into the new application server. That requires some kind of coordination between the original application server and some other device. Perhaps the application server sends the user’s session to a session backup server after each page request. Maybe it serializes the session into a database table or shares its sessions with another designated application server. There are numerous strategies for session failover, but they all involve getting the user’s session off the original server. Most of the time, that implies some level of shared resources.
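
As an illustration of the first of those strategies, here is a minimal sketch. SessionBackupStore and its put method are hypothetical stand-ins for whatever your platform actually provides (a peer server, a cache, or a database table); the point is only that the session’s contents leave the original server after each request.

  import java.io.ByteArrayOutputStream;
  import java.io.IOException;
  import java.io.ObjectOutputStream;
  import java.io.Serializable;
  import java.util.HashMap;
  import java.util.Map;

  // Hypothetical interface for wherever the backup copy lives.
  interface SessionBackupStore {
      void put(String sessionId, byte[] serializedSession);
  }

  // Sketch: after each request, snapshot and serialize the user's session
  // data and ship it off the server, so another instance can restore it
  // if this one dies.
  public class SessionBackup {
      private final SessionBackupStore store;

      public SessionBackup(SessionBackupStore store) {
          this.store = store;
      }

      public void afterRequest(String sessionId, Map<String, Serializable> sessionData) throws IOException {
          ByteArrayOutputStream bytes = new ByteArrayOutputStream();
          try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
              out.writeObject(new HashMap<>(sessionData)); // snapshot of the session attributes
          }
          store.put(sessionId, bytes.toByteArray()); // get the session off this server
      }
  }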

You can approximate a shared-nothing architecture by reducing the fan-in of shared resources, i.e., cutting down the number of servers calling on the shared resource. In the example of session failover, you could do this by designating pairs of application servers that each act as the failover server for the other.

Too often, though, the shared resource will be allocated for exclusive use while a client is processing some unit of work. In these cases, the probability of contention scales with the number of transactions processed by the layer and the number of clients in that layer. When the shared resource saturates, you get a connection backlog. When the backlog exceeds the listen queue, you get failed transactions. At that point, nearly anything can happen. It depends on what function the caller needs the shared resource to provide. Particularly in the case of cache managers (providing coherency for distributed caches), failed transactions lead to stale data or—worse—loss of data integrity.
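
One client-side defense is a hard timeout on every call to the shared resource, so a backed-up resource produces a quick, handleable failure instead of a caller hanging in the connection backlog. Here is a minimal sketch assuming a socket-based protocol; the host, port, and timeout values are placeholders.

  import java.io.IOException;
  import java.net.InetSocketAddress;
  import java.net.Socket;
  import java.util.Arrays;

  // Sketch: bound both the connect and the read, so a saturated shared
  // resource fails fast rather than hanging its callers.
  public class SharedResourceClient {
      public byte[] call(byte[] request) throws IOException {
          try (Socket socket = new Socket()) {
              socket.connect(new InetSocketAddress("lock-manager.example.com", 9000), 250); // placeholder address, 250 ms connect timeout
              socket.setSoTimeout(500); // 500 ms read timeout; a slow read throws SocketTimeoutException
              socket.getOutputStream().write(request);
              socket.getOutputStream().flush();
              byte[] response = new byte[1024];
              int read = socket.getInputStream().read(response);
              return Arrays.copyOf(response, Math.max(read, 0));
          }
      }
  }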

Remember This

Examine production versus QA environments to spot Scaling Effects.

You get bitten by Scaling Effects when you move from small one-to-one development and test environments to full-sized production environments. Patterns that work fine in small environments or one-to-one environments might slow down or fail completely when you move to production sizes.

Watch out for point-to-point communication.

Point-to-point communication scales badly, since the number of connections increases as the square of the number of participants. Consider how large your system can grow while still using point-to-point connections—it might be sufficient. Once you’re dealing with tens of servers, you will probably need to replace it with some kind of one-to-many communication.

Watch out for shared resources.

Shared resources can be a bottleneck, a capacity constraint, and a threat to stability. If your system must use some sort of shared resource, stress-test it heavily. Also, be sure its clients will keep working if the shared resource gets slow or locks up.
