      PostgreSQL Replication Setup , Failover , Rebuilding , Monitoring & Recoverability

• Posted by bushra.rayafeel
• Category: Blog
• Date: May 28, 2022
Prerequisites:
1. The same version of PostgreSQL must be installed on both servers.
2. Configure password-less ssh authentication for the "postgres" user.
3. The production server must run with WAL archiving enabled, by setting archive_mode and archive_command in the postgresql.conf file.
4. The production and standby servers must always have connectivity between them so the archived WAL files can be transferred from production.
5. Set up the standby host's environment and directory structure exactly the same as the primary's.
Step 1: Installation
The first step is to install PPAS (the same version) on two different servers running the same OS/OS version. For this blog I tried these steps on two servers, each running Postgres Plus Advanced Server 9.2.1 on RHEL 6.1. For simplicity, instead of overwriting the default DB cluster/instance on the secondary server, I created a separate instance:

Primary DB Server:

Name: dbserver1
IP Address: 192.168.160.147
Data Directory: /opt/PostgresPlus/9.2AS/data
Port: 5444

Standby DB Server:

Name: dbserver2
IP Address: 192.168.160.150
Data Directory: /opt/PostgresPlus/9.2AS/data2
Port: 5222
      Step 2: Parameter Setup

Change the parameters below in postgresql.conf:

       wal_level = hot_standby
       max_wal_senders = 3
       wal_keep_segments = 128 (optional/depending on load)
       replication_timeout = 5 sec (optional)
       hot_standby = on (required/effective only for hot stand by server)
To ease the pain of backup/restore and failover/failback, I created two postgresql.conf backups on the primary server (under the data directory):
1.   postgresql.conf.dbserver1
2.   postgresql.conf.dbserver2
The two files are identical except for the value of port (dbserver1=5444 and dbserver2=5222). The replication-related parameters and hot_standby are the same in both files: the replication parameters make no difference on the secondary server (unless you use cascaded replication), and the hot_standby value is ignored on the primary server.

Add a new entry to pg_hba.conf:

      host  replication   all   192.168.160.0/24      trust
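As a rough sketch (the PGDATA path below is a demo stand-in, not your real data directory), the Step 2 edits can be scripted by appending the settings to the two files:

```shell
# Hypothetical demo directory standing in for the real data directory.
PGDATA=./demo_data
mkdir -p "$PGDATA"

# Append the replication parameters from Step 2 to postgresql.conf.
cat >> "$PGDATA/postgresql.conf" <<'EOF'
wal_level = hot_standby
max_wal_senders = 3
wal_keep_segments = 128
hot_standby = on
EOF

# Append the replication entry to pg_hba.conf.
echo "host  replication   all   192.168.160.0/24      trust" >> "$PGDATA/pg_hba.conf"
```

Remember that wal_level and max_wal_senders only take effect after a server restart.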
Step 3: Create recovery.conf
Create two dummy recovery.conf files, namely recovery.conf.dbserver1_is_master and recovery.conf.dbserver2_is_master.

recovery.conf.dbserver1_is_master

standby_mode = 'on'
primary_conninfo = 'host=192.168.160.147 port=5444 user=enterprisedb password=ashnik'
restore_command = 'scp enterprisedb@192.168.160.150:/mnt/arch/%f %p'   # optional; needs WAL archiving enabled on the primary
recovery_target_timeline = 'latest'   # optional
trigger_file = '/opt/PostgresPlus/9.2AS/data2/recovery.trigger'

recovery.conf.dbserver2_is_master

standby_mode = 'on'
primary_conninfo = 'host=192.168.160.150 port=5222 user=enterprisedb password=ashnik'
restore_command = 'scp enterprisedb@192.168.160.150:/mnt/arch/%f %p'   # optional; needs WAL archiving enabled on the primary
recovery_target_timeline = 'latest'   # optional
trigger_file = '/opt/PostgresPlus/9.2AS/data/recovery.trigger'
For ease of management, keep the passwords the same on both servers. The password is used in plain text here, but you can always use a password file or an md5 password.
      Step 4: Backup

Take a cold backup of the primary server and restore it on the secondary server. Along with the rest of the files, the backup will include:

        postgresql.conf.dbserver1
        postgresql.conf.dbserver2
        recovery.conf.dbserver1_is_master
        recovery.conf.dbserver2_is_master
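The cold backup/restore can be sketched like this; the directories are demo stand-ins for the real data directory, and the scp of the tarball to the standby is elided:

```shell
# Demo stand-ins for the primary data directory and the standby restore target.
SRC=./demo_primary_data
DST=./demo_standby_restore
mkdir -p "$SRC"
echo "wal_level = hot_standby" > "$SRC/postgresql.conf"

# With the primary stopped, archive the entire data directory.
tar -czf primary_backup.tar.gz -C "$(dirname "$SRC")" "$(basename "$SRC")"

# After copying the tarball to the standby (e.g. via scp), unpack it there.
mkdir -p "$DST"
tar -xzf primary_backup.tar.gz -C "$DST"
ls "$DST/demo_primary_data"
```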
      Step 5: Setup of Standby Server
Now copy postgresql.conf.dbserver2 as postgresql.conf on dbserver2, then copy recovery.conf.dbserver1_is_master as recovery.conf.
Start the db cluster on the secondary server first to make sure it will not lose any transactions. You will see an error in the log complaining about the non-availability of the primary server; it can be ignored at this point. Confirm that the secondary database instance is up and running and that you can connect and run read queries against it.
      Step 6: Start-up the Primary Server
Once you are sure, start the primary server.
      • Check the primary and secondary server. Connect to each of them to confirm the connectivity, role and read/write accessibility.
      • You can use “pg_ctl status” OS command and “pg_is_in_recovery()” SQL function to confirm the status of each database.
        Now let’s try to do a failover.
      Step 7: Create a Failure

First there has to be a failure, so on the primary site kill the postmaster process:

      kill -9 <postmaster-pid>
Now connect to the secondary database and check whether you can perform write operations, or simply check whether it is still in recovery using the pg_is_in_recovery() function. Do you still get "t" as the output? Are you still unable to write to the secondary database?
Don't be surprised: PostgreSQL does not fail over on its own. There have always been two schools of thought about failover mechanisms, one that says "auto failover" and another that says "manual failover". Given the choice, I always prefer a manual failover for a DR site. It ensures that the DR site does not mistake a network failure for a disaster. Moreover, a DR site is not just the database; the application and client connectivity must also be modified accordingly. Hence it is best kept manual. Auto-failover is useful if you want hot streaming replication for HA (which I will discuss in a later post in this series).
Step 8: Switch the Secondary/Slave Database to Read/Write

On the secondary server, create the trigger file (as specified in recovery.conf):

touch /opt/PostgresPlus/9.2AS/data2/recovery.trigger
Now connect again and check whether your current live sessions and new sessions to the secondary database can perform write operations. If yes, great! You have just completed a successful failover.
      Step 9: Rebuilding the Master Database
Now we need to rebuild the old master (assuming that the master database server is up).

First, clean up the database cluster on the old primary server:

           rm -Rf /opt/PostgresPlus/9.2AS/data

      Now, take a backup from new primary (dbserver2) to rebuild the master:

      pg_basebackup -D /opt/PostgresPlus/9.2AS/data -h 192.168.160.150 -W -p 5222 -Fp --xlog-method=stream
Once the backup is complete, copy postgresql.conf.dbserver1 as postgresql.conf, remove recovery.done, and copy recovery.conf.dbserver2_is_master as recovery.conf.
      Step 10: Start the Primary DB as new slave

Now start the db cluster on the master node. Once the start-up is successful, make sure everything is alright by connecting to the database and running the command below:

SELECT pg_is_in_recovery();  -- expected output is "t"

Then run the command below on both dbserver1 and dbserver2; the results should be the same:

      SELECT txid_current_snapshot();
Carefully inspect the log files on the new standby node (dbserver1) to confirm that recovery is in progress and there are no issues with replication.

Step 11: Monitoring

$ psql -c "SELECT pg_current_xlog_location()" -h 192.168.160.147   (primary host)
 pg_current_xlog_location
--------------------------
 0/2000000
(1 row)

$ psql -c "select pg_last_xlog_receive_location()" -h 192.168.160.150   (standby host)
 pg_last_xlog_receive_location
-------------------------------
 0/2000000
(1 row)

$ psql -c "select pg_last_xlog_replay_location()" -h 192.168.160.150   (standby host)
 pg_last_xlog_replay_location
------------------------------
 0/2000000
(1 row)
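To turn these locations into a lag figure, note that an xlog location like 0/2000000 is a pair of hex numbers: the high 32 bits of the position, then a byte offset. A small sketch (the helper function name is my own) converts a location to an absolute byte position so the primary and standby values can be subtracted:

```shell
# lsn_to_bytes: convert an xlog location "hi/lo" (both hex) into an
# absolute byte offset: hi * 2^32 + lo.
lsn_to_bytes() {
  hi=${1%/*}
  lo=${1#*/}
  echo $(( 0x$hi * 4294967296 + 0x$lo ))
}

# Example: lag in bytes between a primary and a standby location.
primary=$(lsn_to_bytes "0/3000060")
standby=$(lsn_to_bytes "0/2000000")
echo "$(( primary - standby )) bytes behind"
```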
Step 12: Other Ways to Check Streaming Replication
The easiest way is to run "select now() - pg_last_xact_replay_timestamp();" on the slave. The pg_last_xact_replay_timestamp() function gives the timestamp of the last transaction replayed during recovery, i.e. the time at which the commit or abort WAL record for that transaction was generated on the primary. If no transactions have been replayed during recovery, it returns NULL. Otherwise, while recovery is still in progress the value increases monotonically; once recovery has completed, it remains static at the value of the last transaction applied during that recovery. When the server has been started normally without recovery, the function returns NULL. Try some operations on the master and then check the function's output.
      If you want to check the delay manually, then go for below steps:

Step 1: Create a table on the primary using the commands below:

      create table stream_delay (tstamp timestamp without time zone );
      insert into stream_delay select now();

Step 2: Schedule the command below on the primary to execute every minute via cron:

update stream_delay set tstamp = now();
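An illustrative crontab entry for this (the port and connection details are assumed from the setup above; adjust to your environment):

```shell
# Runs every minute on the primary; adjust psql connection options as needed.
* * * * * psql -p 5444 -c "update stream_delay set tstamp = now();"
```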
Step 3: Verify the delay on the slave by selecting from the stream_delay table.
It should show the last time the row was updated on the primary. The difference between that timestamp and the slave server's current timestamp is the replication delay between primary and slave.

You can also check the progress of streaming replication using the ps command. The displayed LSNs indicate the byte position in the xlogs that the standby server has written up to.

[primary] $ ps -ef | grep sender
postgres  6879  6831  0 10:31 ?  00:00:00 postgres: wal sender process  postgres 127.0.0.1(44663) streaming 0/2000000

[standby] $ ps -ef | grep receiver
postgres  6878  6872  1 10:31 ?  00:00:00 postgres: wal receiver process  streaming 0/2000000

Tags: Postgres replication, PostgreSQL
