Determining connection costs and limits

Excessive database connections are not without risk. The level of risk we incur, and what exactly qualifies as excessive, are important to determine early. The company and our customers would find it extremely inconvenient if normal database activity exhausted system memory, caused timeouts due to increased context switching, or overwhelmed the kernel with an overly large process table.

To maintain a highly available server, we must know the full impact of every single connection in terms of required memory and CPU resources. Servicing several disparate applications from various external servers is difficult enough; we must provide that availability while simultaneously avoiding resource exhaustion. If we properly assess the ideal balance between connection count and performance early on, we can avoid costly emergencies.

Irrespective of whether we helped specify the hardware that will host our PostgreSQL installation, it's still our job to figure out how many clients it can comfortably support. Since this chapter is primarily focused on database pools, we can use this opportunity to choose a practical pool size as well.

Getting ready

We will make a few rough calculations in this section. If possible, obtain data regarding the number of CPU cores, the amount of available RAM, and the number of disk spindles in the storage pool.

Linux systems expose most of this information through the /proc virtual filesystem. To obtain the number of CPUs, execute this at the command line and add one to the highest processor value listed, since numbering starts at zero:

grep ^processor /proc/cpuinfo
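
If you would rather skip the manual count, grep can also report the number of matching lines directly, which gives the core count in one step:

grep -c ^processor /proc/cpuinfo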

For the amount of RAM in kilobytes, use this command:

grep MemTotal /proc/meminfo
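
Because the calculations that follow work in megabytes, it can be convenient to let awk divide the kilobyte figure for you; any method of dividing by 1,024 works just as well:

awk '/MemTotal/ {print int($2 / 1024)}' /proc/meminfo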

Finding the number of disk spindles can vary greatly between RAID and SAN implementations, so we suggest obtaining that figure from your infrastructure department.

How to do it...

Start by calculating the number of connections that the RAM can accommodate by following these steps:

  1. Begin the estimate with 8 MB used per connection.
  2. Add four times the value of the work_mem PostgreSQL configuration setting in megabytes, for a per-client total.
  3. Obtain the amount of RAM in megabytes.
  4. Divide half of the RAM size by the per-client MB total.

Next, calculate the number of connections the CPU and disk resources can support by following these steps:

  1. Obtain the CPU count in cores, including virtual cores if present.
  2. Double the CPU core count.
  3. Add the number of disk spindles.

Use the lower of the two values as the final ideal connection count.
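
If the relevant figures are handy, the entire estimate can be scripted. What follows is a minimal bash sketch; the work_mem value and spindle count at the top are placeholder assumptions that should be replaced with the real values from your environment:

#!/bin/bash

# Assumed inputs; replace these with values from your own server.
WORK_MEM_MB=8      # current work_mem setting, in MB
SPINDLES=8         # number of disk spindles in the storage pool

# Gather CPU and RAM details from /proc.
CORES=$(grep -c ^processor /proc/cpuinfo)
RAM_MB=$(awk '/MemTotal/ {print int($2 / 1024)}' /proc/meminfo)

# RAM-based limit: half of RAM divided by the per-connection estimate.
PER_CONN_MB=$(( 8 + 4 * WORK_MEM_MB ))
RAM_LIMIT=$(( RAM_MB / 2 / PER_CONN_MB ))

# CPU/disk-based limit: twice the core count plus the spindle count.
CPU_LIMIT=$(( 2 * CORES + SPINDLES ))

# The lower of the two values is our ideal connection count.
if [ "$RAM_LIMIT" -lt "$CPU_LIMIT" ]; then
    echo "Ideal connection count: $RAM_LIMIT"
else
    echo "Ideal connection count: $CPU_LIMIT"
fi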

How it works...

To know how much RAM a connection may use, we start with a baseline of eight megabytes. This accounts for library overhead, the likelihood of using temporary table space, and the other various allocations necessary for a session to function. To that, we add four times the work_mem setting, which the server uses for sorts and other query calculations.

Why four? Large and complex queries will use more, while short and simple queries will use less, so we start with something in the middle. If anything, this multiplier is somewhat pessimistic and trends toward assuming higher memory use. That's fine, since overestimating in this case is safer than running out of memory in the presence of several simultaneous complex queries.

With this total, we can see how many connections will use half of the available RAM. We only use half of the system RAM here, since the database itself needs memory. In addition, queries are much faster when tables are available in the operating system page cache. If too much RAM is reserved for client use, query performance can suffer considerably.

In the next set of calculations, we start with the CPU total and double this amount. The more disk spindles available, the less time each CPU spends waiting for results. By adding the number of disks, we get an approximation of how many connections our CPUs can actually support without excessive idling.

By taking the lower of these two calculations, we account for whatever bottleneck will constrain system performance the most. This is our ideal connection count, and it works as a first approximation for the size of any connection pool we create.

There's more...

For an example of this in action, consider a system with 32 GB of RAM, eight CPU cores, and eight disk spindles. We used 8 MB for our work_mem setting, which means we may need up to 40 MB per database connection. Half of the RAM, or 16 GB, can then safely support about 409 connections, assuming memory is our only resource limit.

Otherwise, our eight CPUs and eight disks can support up to 24 connections. This is quite a discrepancy! However, 24 is the safer of the two limits if we want to prevent latency. If we find that a certain amount of latency is not overly disruptive, we can increase the connection count, but not beyond 400; otherwise, we risk actually exhausting the available RAM.
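
If you'd like to verify this arithmetic, the same figures can be plugged into simple shell expressions; 32 GB is written here as 32,768 MB:

echo $(( 32768 / 2 / (8 + 4 * 8) ))   # RAM-based limit: 409 connections
echo $(( 2 * 8 + 8 ))                 # CPU and disk limit: 24 connections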

Tip

Please keep in mind that the focus of this book is high availability at nearly all costs, and as such, our formulas are extremely pessimistic. We encourage experimentation with these values; you may find a better balance than what we suggest here.
