1. The same version of the PostgreSQL database must be installed on both servers.
2. Configure password-less SSH authentication for the “postgres” user.
3. The production server must run with WAL archiving enabled, by setting archive_mode and archive_command in the postgresql.conf file (a sketch follows this list).
4. The production and standby servers should always have connectivity between them, so that archived WAL files can be transferred from production.
5. Set up your standby host’s environment and directory structure exactly the same as your primary.
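For reference, the archiving setup in the primary’s postgresql.conf might look like the sketch below. The /mnt/arch location matches the restore_command used later in this post; the copy command itself is an assumption and will vary with your archive storage:

archive_mode = on
archive_command = 'cp %p /mnt/arch/%f'    # assumption: /mnt/arch is a shared/mounted archive location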
Step 1: Installation
The first step is to install PPAS (the same version) on two different servers running the same OS/OS version. For this blog, I tried these steps on two servers, each running Postgres Plus Advanced Server 9.2.1 on RHEL 6.1. For the sake of simplicity, instead of overwriting the default DB cluster/instance on the secondary DB server, I created a different DB instance.
Primary DB Server:
Name: dbserver1
IP Address: 192.168.160.147
Data Directory: /opt/PostgresPlus/9.2AS/data
Port: 5444

Stand by DB Server:
Name: dbserver2
IP Address: 192.168.160.150
Data Directory: /opt/PostgresPlus/9.2AS/data2
Port: 5222
Step 2: Parameter Setup
Change the parameters below:
wal_level = hot_standby
max_wal_senders = 3
wal_keep_segments = 128     # optional, depending on load
replication_timeout = 5s    # optional
hot_standby = on            # required; effective only on the hot standby server
To ease the pain of backup-restore and failover-failback, I created two postgresql.conf backups on the primary server (under the data directory).
Both files are the same; the only difference is the value of port (5444 for dbserver1 and 5222 for dbserver2). The values of the replication-related parameters and hot_standby are the same in both files: the replication parameters make no difference on the secondary server unless you use cascading replication, and the hot_standby value is ignored on the primary server.
Add a new entry in pg_hba.conf:
host replication all 192.168.160.0/24 trust
Step 3: Create recovery.conf
Create two dummy recovery.conf files, namely recovery.conf.dbserver1_is_master and recovery.conf.dbserver2_is_master:
recovery.conf.dbserver1_is_master (used on dbserver2 while dbserver1 is the master):

standby_mode = 'on'
primary_conninfo = 'host=192.168.160.147 port=5444 user=enterprisedb password=ashnik'
restore_command = 'scp email@example.com:/mnt/arch/%f %p'    # optional; needs archiving to be enabled on the primary
recovery_target_timeline = 'latest'    # optional
trigger_file = '/opt/PostgresPlus/9.2AS/data/recover.trigger'

recovery.conf.dbserver2_is_master (used on dbserver1 while dbserver2 is the master):

standby_mode = 'on'
primary_conninfo = 'host=192.168.160.150 port=5222 user=enterprisedb password=ashnik'
restore_command = 'scp firstname.lastname@example.org:/mnt/arch/%f %p'    # optional; needs archiving to be enabled on the primary
recovery_target_timeline = 'latest'    # optional
trigger_file = '/opt/PostgresPlus/9.2AS/data/recover.trigger'
For ease of management, you should keep the password the same on both servers. Here the password is used in plain text, but one can always use a password file or md5 authentication.
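For example, a password file (~/.pgpass on each server, permissions 0600) would contain entries like the sketch below, using the hosts, ports, and credentials from this setup:

192.168.160.147:5444:*:enterprisedb:ashnik
192.168.160.150:5222:*:enterprisedb:ashnik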
Step 4: Backup
You need to take a cold backup of your primary server and restore it on the secondary server (a sketch follows the file list below). Along with the rest of the files in the backup, you will also get the files below:
postgresql.conf.dbserver1
postgresql.conf.dbserver2
recovery.conf.dbserver1_is_master
recovery.conf.dbserver2_is_master
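A minimal sketch of the cold backup and restore, assuming the primary is cleanly shut down and password-less SSH is in place (the rsync invocation is an assumption; tar over scp works just as well):

[dbserver1] $ pg_ctl -D /opt/PostgresPlus/9.2AS/data stop -m fast
[dbserver1] $ rsync -a /opt/PostgresPlus/9.2AS/data/ postgres@192.168.160.150:/opt/PostgresPlus/9.2AS/data2/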
Step 5: Setup of Standby Server
Now copy postgresql.conf.dbserver2 as postgresql.conf on dbserver2. Then copy recovery.conf.dbserver1_is_master as recovery.conf, as shown below.
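Something like this, assuming the restored data directory on dbserver2 is /opt/PostgresPlus/9.2AS/data2 as described above:

[dbserver2] $ cd /opt/PostgresPlus/9.2AS/data2
[dbserver2] $ cp postgresql.conf.dbserver2 postgresql.conf
[dbserver2] $ cp recovery.conf.dbserver1_is_master recovery.conf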
Now start the DB cluster on the secondary server first, to make sure it does not lose any transactions. You will see an error in the log complaining about the non-availability of the primary server, which can be ignored at this point. Confirm that the secondary database instance is up and running and that you can connect and fire read queries against the database.
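For instance (adjust psql’s -d/-U options for your environment):

[dbserver2] $ pg_ctl -D /opt/PostgresPlus/9.2AS/data2 start
[dbserver2] $ psql -p 5222 -c "SELECT pg_is_in_recovery();"    # expected output: t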
Step 6: Start-up the Primary Server
Once you are sure, start the primary server.
- Check the primary and secondary servers. Connect to each of them to confirm connectivity, role, and read/write accessibility.
- You can use the “pg_ctl status” OS command and the “pg_is_in_recovery()” SQL function to confirm the status of each database, as shown below.
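For example (again, add -d/-U options as appropriate):

[dbserver1] $ pg_ctl -D /opt/PostgresPlus/9.2AS/data status
[dbserver1] $ psql -p 5444 -c "SELECT pg_is_in_recovery();"    # expected: f (primary, read/write)
[dbserver2] $ psql -p 5222 -c "SELECT pg_is_in_recovery();"    # expected: t (standby, read-only)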
Now let’s try to do a failover.
Step 7: Create a Failure
So for that, first there has to be a failure. On the primary site, fire the command below to kill the postmaster process:
kill -9 <postmaster_pid>
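The postmaster PID is the first line of the postmaster.pid file in the data directory, so on the primary something like this works:

[dbserver1] $ kill -9 $(head -1 /opt/PostgresPlus/9.2AS/data/postmaster.pid)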
Now connect to the secondary database and check whether you can perform a write operation, or simply check whether it is still in recovery state with the pg_is_in_recovery() function. So, do you still get “t” as the output of that function? Are you still unable to perform write operations on the secondary database?
Well, don’t be surprised: PostgreSQL does not do failover on its own. There have always been two schools of thought about any failover mechanism, one which says “auto failover” and the other which says “manual failover”. If given a choice, I always prefer a manual failover for a DR site. This ensures that my DR site does not mistake a network failure for a disaster or an actual failure. Moreover, the DR site is not just the database; one also needs to make sure the application and client connectivity are modified accordingly. Hence it’s best to keep it manual. Auto-failover is useful if you want to do hot-streaming replication for HA (which I will discuss in a later post in this series).
Step 8: Switch Secondary/slave database to become read/write
On the secondary server, create the trigger file (as specified in recovery.conf):
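Given the trigger_file path set in recovery.conf.dbserver1_is_master above:

[dbserver2] $ touch /opt/PostgresPlus/9.2AS/data/recover.trigger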
Now connect again and check whether your current live sessions/new sessions to the secondary database can perform write operations. If yes, then great! You have just completed a successful failover.
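A quick throwaway write is enough to verify; the table name here is purely illustrative:

[dbserver2] $ psql -p 5222 -c "CREATE TABLE failover_test (id int); DROP TABLE failover_test;"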
Step 9: Rebuilding the Master Database
Now, we need to rebuild the master (assuming that the master database server is up).
First of all, clean up the database cluster on the old primary server:
rm -Rf /opt/PostgresPlus/9.2AS/data
Now, take a backup from the new primary (dbserver2) to rebuild the master:
pg_basebackup -D /opt/PostgresPlus/9.2AS/data -h 192.168.160.150 -W -p 5222 -Fp --xlog-method=stream
Once the backup is complete, copy postgresql.conf.dbserver1 as postgresql.conf, then remove recovery.done and copy recovery.conf.dbserver2_is_master as recovery.conf:
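That is, on dbserver1:

[dbserver1] $ cd /opt/PostgresPlus/9.2AS/data
[dbserver1] $ cp postgresql.conf.dbserver1 postgresql.conf
[dbserver1] $ rm -f recovery.done
[dbserver1] $ cp recovery.conf.dbserver2_is_master recovery.conf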
Step 10: Start the Primary DB as new slave
Now start the DB cluster on the master node. Once the start-up is successful, make sure everything is all right by connecting to the database and firing the command below:
SELECT pg_is_in_recovery(); -- expected output is “t”
Carefully inspect the log files on the secondary node (dbserver1) to confirm that recovery is in progress and that there are no issues with replication. Then fire the commands below on dbserver1 and dbserver2; the locations reported by both should be the same:
$ psql -c "SELECT pg_current_xlog_location()" -h 192.168.160.147    (primary host)
 pg_current_xlog_location
---------------------------
 0/2000000
(1 row)

$ psql -c "SELECT pg_last_xlog_receive_location()" -h 192.168.160.150    (standby host)
 pg_last_xlog_receive_location
-------------------------------
 0/2000000
(1 row)

$ psql -c "SELECT pg_last_xlog_replay_location()" -h 192.168.160.150    (standby host)
 pg_last_xlog_replay_location
------------------------------
 0/2000000
(1 row)
Step 11: Other ways to check streaming replication
The easiest way is “select now() - pg_last_xact_replay_timestamp();” on the slave side. The pg_last_xact_replay_timestamp() function gives the timestamp of the last transaction replayed during recovery, i.e. the time at which the commit or abort WAL record for that transaction was generated on the primary. If no transactions have been replayed during recovery, this function returns NULL. Otherwise, while recovery is still in progress, this value will increase monotonically. If recovery has completed, this value remains static at the value of the last transaction applied during that recovery. When the server has been started normally without recovery, the function returns NULL. You can try some operation on the master and then check the function output.
If you want to check the delay manually, then follow the steps below:
Step 1: Create a table on the primary using the commands below.
create table stream_delay (tstamp timestamp without time zone);
insert into stream_delay select now();
Step 2: Schedule the command below on the primary to execute every minute via a cron job (a sketch of the crontab entry follows).
update stream_delay set tstamp = now();
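A sketch of the crontab entry for the postgres OS user; the psql path, port, and connection options are assumptions based on this setup:

* * * * * /opt/PostgresPlus/9.2AS/bin/psql -p 5444 -c "update stream_delay set tstamp = now();"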
Step 3: Verify the delay on the slave by selecting from the “stream_delay” table.
It should show the last time the table was updated on the primary. The difference between this timestamp and the current timestamp of the slave server shows the time delay between the primary and the slave, e.g. with the query below.
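For example, on the slave:

select now() - tstamp as replication_delay from stream_delay;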
You can also check the progress of streaming replication by using the ps command. The displayed LSNs indicate the byte position in the xlogs up to which the standby server has written.
[primary] $ ps -ef | grep sender
postgres  6879  6831  0 10:31 ?  00:00:00 postgres: wal sender process postgres 127.0.0.1(44663) streaming 0/2000000

[standby] $ ps -ef | grep receiver
postgres  6878  6872  1 10:31 ?  00:00:00 postgres: wal receiver process   streaming 0/2000000