PuppetDB – PostgreSQL with PGTune

When in doubt about how to tune a PostgreSQL server, try PGTune. This tool reads your current postgresql.conf and outputs a new one with settings tuned for the machine it's running on. One important caveat: PGTune does not account for the memory needed by the message queue or the web server, so backing its recommendations off slightly to leave some headroom can help overall performance.

Please note that PGTune assumes the node it runs on exists solely to serve PostgreSQL. Its settings are a poor fit for a single monolithic master; tkishel/pe_tune is a much more useful tool for configuring those servers.

We'll want to begin by cloning and entering the current PGTune project:

git clone https://github.com/gregs1104/pgtune.git
Cloning into 'pgtune'...
remote: Counting objects: 112, done.
remote: Total 112 (delta 0), reused 0 (delta 0), pack-reused 112
Receiving objects: 100% (112/112), 66.21 KiB | 0 bytes/s, done.
Resolving deltas: 100% (63/63), done.
cd pgtune

Then we run PGTune against our Puppet Enterprise postgresql.conf:

./pgtune -i /opt/puppetlabs/server/data/postgresql/9.6/data/postgresql.conf
#------------------------------------------------------------------------------
# pgtune for version 8.4 run on 2018-08-19
# Based on 3882384 KB RAM, platform Linux, 100 clients and mixed workload
#------------------------------------------------------------------------------

default_statistics_target = 100
maintenance_work_mem = 224MB
checkpoint_completion_target = 0.9
effective_cache_size = 2816MB
work_mem = 18MB
wal_buffers = 16MB
checkpoint_segments = 32
shared_buffers = 896MB
max_connections = 100
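The arithmetic behind these numbers is not magic. As a rough sketch, the major memory settings can be derived from total RAM using widely published heuristics. Note this is my own approximation, not pgtune's exact algorithm; the real tool rounds to friendlier values and applies platform-specific caps, so its output above (shared_buffers = 896MB, work_mem = 18MB) differs from these raw numbers:

```python
# Rough sketch of common PostgreSQL memory heuristics for a "mixed" workload.
# This approximates the kind of math pgtune performs; the real tool rounds
# and caps values, so its output will differ from these raw figures.

def rough_tuning(total_ram_kb, max_connections=100):
    ram_mb = total_ram_kb // 1024
    return {
        "shared_buffers": f"{ram_mb // 4}MB",            # ~25% of RAM
        "effective_cache_size": f"{ram_mb * 3 // 4}MB",  # ~75% of RAM
        "maintenance_work_mem": f"{min(ram_mb // 16, 2048)}MB",  # capped
        "work_mem": f"{ram_mb // 4 // max_connections}MB",  # per-sort budget
    }

# The 3882384 KB machine from the pgtune run above:
print(rough_tuning(3882384))
```

The useful takeaway is the shape of the math: shared_buffers and effective_cache_size scale with total RAM, while work_mem is a per-connection (really per-sort) slice of what remains.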

These settings come back in a form suited to manually managing a postgresql.conf. Let's translate them into Puppet Enterprise Hiera settings that can be placed in common.yaml to drive our PuppetDB:

---
puppet_enterprise::profile::database::maintenance_work_mem: 224MB
puppet_enterprise::profile::database::checkpoint_completion_target: 0.9
puppet_enterprise::profile::database::effective_cache_size: 2816MB
puppet_enterprise::profile::database::work_mem: 18MB
puppet_enterprise::profile::database::wal_buffers: 16MB
puppet_enterprise::profile::database::checkpoint_segments: 32
puppet_enterprise::profile::database::shared_buffers: 896MB

# PgTune recommends just 100 max_connections, but Puppet Enterprise
# generally recommends a higher amount due to the number of nodes that
# can connect to the system. I'll tune it for that purpose.
puppet_enterprise::profile::database::max_connections: 400
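Raising max_connections isn't free: work_mem is allocated per sort or hash operation, so each active connection can consume one or more multiples of it. A quick back-of-envelope check (my own arithmetic, not anything pgtune or Puppet computes) shows why the two settings should be tuned together:

```python
# Back-of-envelope memory ceiling: work_mem is a per-sort/per-hash
# allocation, so a busy connection may use several multiples of it.
work_mem_mb = 18
max_connections = 400

# Pessimistic ceiling: every connection running one work_mem-sized sort at once.
worst_case_mb = work_mem_mb * max_connections
print(worst_case_mb)  # 7200 MB -- far more than this 3.7 GB box has
```

In practice only a fraction of connections are active at any moment, but the ceiling illustrates why bumping max_connections to 400 on a small machine may warrant lowering work_mem as well.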

When using open source Puppet, we'll instead lean on the puppetlabs/postgresql module, a dependency of puppetlabs/puppetdb. Each value we want to set becomes an individual resource, and the whole set can be represented as a hash in Hiera at the PuppetDB node's level. I would not recommend putting these particular settings in common.yaml if you have other PostgreSQL servers in your environment:

---
postgresql::server::config_entries:
  maintenance_work_mem: 224MB
  checkpoint_completion_target: 0.9
  effective_cache_size: 2816MB
  work_mem: 18MB
  wal_buffers: 16MB
  checkpoint_segments: 32
  shared_buffers: 896MB
  max_connections: 400

Understanding these key concepts allows us to configure our individual nodes for maximum performance. For many users, this will be enough to run Puppet in their environment. For more extreme cases, we can turn to horizontal scaling, adding more copies of our Puppet Servers and PuppetDBs to support more agents.
