Replicating just one table is clearly not enough for a production application. Also, there is usually no way to guarantee that the data structure never changes. At some point, it is simply necessary to deploy changes to the data structures (so-called DDLs).
The problem now is that Slony relies heavily on triggers. A trigger can fire when a row in a table changes. This works for all ordinary tables, but it does not work for system tables. So, if you deploy a new table or if you happen to change a column, there is no way for Slony to detect that. You therefore have to run a script that deploys those changes inside the cluster to make it work.
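To see why, consider a minimal sketch (the table name is made up for illustration): row-level triggers fire on DML, but no such trigger fires when the structure itself changes, because DDL only touches the system catalogs:

```
-- DML on an ordinary table can fire a trigger, so Slony sees it:
INSERT INTO t_demo (id) VALUES (1);

-- DDL modifies the system catalogs; no table trigger fires,
-- so Slony cannot see a change like this one:
ALTER TABLE t_demo ADD COLUMN name text;
```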
We need a slonik script for that:
#!/bin/sh

MASTERDB=db1
SLAVEDB=db2
HOST1=localhost
HOST2=localhost
DBUSER=hs

slonik<<_EOF_
cluster name = first_cluster;

node 1 admin conninfo = 'dbname=$MASTERDB host=$HOST1 user=$DBUSER';
node 2 admin conninfo = 'dbname=$SLAVEDB host=$HOST2 user=$DBUSER';

execute script
(
    filename = '/tmp/deploy_ddl.sql',
    event node = 1
);
_EOF_
The key to success is execute script. We simply pass an SQL file to the call and tell it to use node 1 as the event node. The content of the SQL file can be quite simple; it should just list the DDL statements we want to execute:
CREATE TABLE t_second (id int4, name text);
Running the file can be done just as before:
hs@hs-VirtualBox:~/slony$ ./slony_ddl.sh
The table will be deployed on both nodes. The following listing shows that the table has also made it to the second node, which proves that things are working as expected:
db2=# \d t_second
      Table "public.t_second"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |
 name   | text    |
Of course, you can also create new tables without using Slony, but this is not recommended. Adding columns to a replicated table behind Slony's back will definitely end in disaster.
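The safe path for such a change is the same execute script mechanism shown above: put the statement into a file and let slonik apply it on every node. A minimal sketch (the file name and the new column are made up for illustration):

```
-- /tmp/deploy_ddl2.sql: run via "execute script" so that every
-- node in the cluster applies the identical change
ALTER TABLE t_second ADD COLUMN created_at timestamp;
```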