
Postgres database cloud migration using a crontab job scheduler

• Posted by bushra.rayafeel
• Categories Blog
• Date May 28, 2022

First, note down the server and database information: which server and which database you want to schedule backup (crontab) jobs for. For example, I'm taking the following servers and databases for scheduling backup jobs.

SERVER IP       DATABASE
10.10.10.1      chennai
10.10.10.2      mumbai
10.10.10.3      Pune
10.10.10.4      delhi

      ========================================================================
      DATABASE SIZE CHECKING: 

• Before taking a backup, it is mandatory to check the database size and whether enough free disk space is available at the backup location.
• The pg_size_pretty function converts a size in bytes into a human-readable format; without pg_size_pretty, the size is shown in bytes.
      1.postgres=# SELECT pg_size_pretty(pg_database_size('chennai'));
       pg_size_pretty
      ----------------
       186 GB 
      (1 row)
      
      2.postgres=# SELECT pg_size_pretty(pg_database_size('mumbai'));
       pg_size_pretty
      ----------------
       338 GB
      (1 row)
      
      
      3.postgres=# SELECT pg_size_pretty(pg_database_size('Pune'));
       pg_size_pretty
      ----------------
       194 GB
      (1 row)
      
      4.postgres=# SELECT pg_size_pretty(pg_database_size('delhi'));
       pg_size_pretty
      ----------------
       55 GB
      (1 row)
      
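The first bullet above also calls for a free-disk-space check at the backup location; a minimal sketch (the 200 GB threshold and the BACKUP_DIR fallback are illustrative assumptions, not part of the original scripts):

```shell
#!/bin/sh
# Check that the backup location has enough free space before dumping.
backup_dir=${BACKUP_DIR:-/tmp}          # /backup in the scripts below
required_kb=$((200 * 1024 * 1024))      # ~200 GB, roughly the largest DB here
free_kb=$(df -Pk "$backup_dir" | awk 'NR==2 {print $4}')
if [ "$free_kb" -ge "$required_kb" ]; then
    echo "enough space in $backup_dir"
else
    echo "NOT enough space in $backup_dir"
fi
```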

      ========================================================================
We are using the custom backup format (-Fc), which typically reduces the dump to roughly 7-10% of the original size (a 90-93% reduction). A backup can take 4-9 hours; the time depends on transaction activity. For example, a 200 GB database with little transaction activity may finish in 3-5 hours. With the parallel option your backup can be about two times faster than normal (-j 2) or four times faster (-j 4), but the parallel option is only available from version 9.3 onward.
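One caveat worth noting: pg_dump's -j option only works with the directory output format (-Fd), not with the custom format (-Fc) used in the scripts below; a sketch (the output path is illustrative):

```shell
# Parallel dump (PostgreSQL 9.3+): -j requires the directory format -Fd.
pg_dump -Fd -j 4 -f /backup/chennai_dir chennai
# The directory format can also be restored in parallel:
# pg_restore -j 4 -d chennai /backup/chennai_dir
```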

POSTGRESQL BACKUP SCRIPT FOR DIFFERENT SERVER DATABASES:


PREPARE THE CRONTAB.
List the existing crontab jobs before adding or editing anything:
bash-4.1$ crontab -l

To edit or add crontab jobs:

bash-4.1$ crontab -e

Before scheduling a crontab job, check the server time, because the server time and your desktop time may differ:
bash-4.1$ date
Thu Oct 19 13:36:19 IST 2017  (here 13 means 1 pm)
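For reference, the five leading fields of each crontab entry below are minute, hour, day of month, month, and day of week:

```
# minute  hour  day-of-month  month  day-of-week  command
  53      12    19            *      *            sh /home/postgres/eds/chennai.sh
# i.e. run at 12:53 on the 19th of every month
```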
      —————————————
1. Backup of Chennai:

      53 12 19 * * sh /home/postgres/eds/chennai.sh
      
      ----chennai.sh-------
      export PATH=/opt/PostgreSQL/9.3/bin:$PATH
      export PGDATA=/opt/PostgreSQL/9.3/data
      export PGDATABASE=postgres
      export PGUSER=postgres
      export PGPORT=5432
      export PGLOCALEDIR=/opt/PostgreSQL/9.3/share/locale
      export MANPATH=$MANPATH:/opt/PostgreSQL/9.3/share/man
      pg_dump -Fc chennai > /backup/chennai.dump

2. Backup of Mumbai:

      40 12 19 * * sh /home/postgres/edbscripts/mumbai.sh
      
      -----mumbai.sh-----
      export PATH=/opt/PostgreSQL/9.2/bin:$PATH
      export PGDATA=/data
      export PGDATABASE=postgres
      export PGUSER=postgres
      export PGPORT=5432
      export PGLOCALEDIR=/opt/PostgreSQL/9.2/share/locale
      export MANPATH=$MANPATH:/opt/PostgreSQL/9.2/share/man
      pg_dump -Fc mumbai > /backup/mumbai.dump


3. Backup of Pune:

      40 13 19 * * sh /home/postgres/eds/pune.sh
      
      ----pune.sh----
      export PATH=/opt/PostgreSQL/9.3/bin:$PATH
      export PGDATA=/opt/PostgreSQL/9.3/data
      export PGDATABASE=postgres
      export PGUSER=postgres
      export PGPORT=5432
      export PGLOCALEDIR=/opt/PostgreSQL/9.3/share/locale
      export MANPATH=$MANPATH:/opt/PostgreSQL/9.3/share/man
      pg_dump -Fc Pune  -f /backup/pune.dump


4. Backup of delhi:

      37 14 19 * * sh /home/postgres/edbscripts/delhi.sh
      
      ----delhi.sh----
      export PATH=/opt/PostgreSQL/9.2/bin:$PATH
      export PGDATA=/opt/PostgreSQL/9.2/data
      export PGDATABASE=postgres
      export PGUSER=postgres
      export PGPORT=5432
      export PGLOCALEDIR=/opt/PostgreSQL/9.2/share/locale
      export MANPATH=$MANPATH:/opt/PostgreSQL/9.2/share/man
      pg_dump -Fc delhi  -f /backup/delhi.dump


NOTE: After the backup completes, comment out the crontab entries (prefix them with #); otherwise these jobs will run again on the 19th of every month at their scheduled times (e.g. 2.37 pm for delhi).

      ========================================================================
      Rename the database:

      postgres=# ALTER DATABASE  delhi RENAME TO  delhi_old;
      ALTER DATABASE

      Also ensure that there are no other clients connected to the database at the time.
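If clients are still connected, the rename fails with "database is being accessed by other users". One way to disconnect them (a sketch, run as a superuser; on 9.2+ the column is pid, on older releases procpid):

```sql
-- Terminate all other sessions connected to the database being renamed.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'delhi' AND pid <> pg_backend_pid();
```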
      ========================================================================
      RESTORE THE DATABASE:

1. Restoring Chennai:

      postgres=# ALTER DATABASE  chennai RENAME TO  chennai_old;
      ALTER DATABASE
      postgres=# create database chennai;
      CREATE DATABASE

Monitor both the master and the slave while the restore runs, to check whether the database is growing:
      MASTER:

      postgres=# SELECT pg_size_pretty(pg_database_size('chennai'));
       pg_size_pretty
      ----------------
       7233 kB
      (1 row)

      SLAVE:

      postgres=# SELECT pg_size_pretty(pg_database_size('chennai'));
       pg_size_pretty
      ----------------
       7233 kB
      (1 row)

Before setting the crontab, check the date again:
      bash-4.2$ date
      Sat Oct 21 08:24:39 IST 2017

      AT CRONTAB:

      crontab -e
      53 12 21 * * sh /opt/PostgreSQL/9.6/cron_script/chennai_restore.sh
      ----chennai_restore.sh-------
      export PATH=/opt/PostgreSQL/9.6/bin:$PATH
      export PGDATA=/opt/PostgreSQL/9.6/data
      export PGDATABASE=postgres
      export PGUSER=postgres
      export PGPORT=5432
      export PGLOCALEDIR=/opt/PostgreSQL/9.6/share/locale
      export MANPATH=$MANPATH:/opt/PostgreSQL/9.6/share/man
      
      
      TODAY=$(date +"%m-%d-%Y-%T")
      LOG=/opt/PostgreSQL/9.6/cron_script
      echo "chennai restore start Time: `date +%d-%m-%y-%T`" >> $LOG/chennai_$TODAY.log
pg_restore -d chennai  /backup/chennai.dump
      echo "chennai restore End Time: `date +%d-%m-%y-%T`" >> $LOG/chennai_$TODAY.log

2. Restoring Mumbai:

      postgres=# ALTER DATABASE  mumbai  RENAME TO  mumbai_old;
      ALTER DATABASE
      postgres=# create database mumbai;
      CREATE DATABASE

Monitor both the master and the slave while the restore runs, to check whether the database is growing:
      MASTER:

      postgres=# SELECT pg_size_pretty(pg_database_size('mumbai'));
       pg_size_pretty
      ----------------
       7233 kB
      (1 row)

      SLAVE:

      postgres=# SELECT pg_size_pretty(pg_database_size('mumbai'));
       pg_size_pretty
      ----------------
       7233 kB
      (1 row)

Before setting the crontab, check the date again:
      bash-4.2$ date
      Sat Oct 21 08:24:39 IST 2017

      AT CRONTAB:

      crontab -e
      26 11 21 * * sh /opt/PostgreSQL/9.6/cron_script/mumbai_restore.sh
      ----mumbai_restore.sh----
      export PATH=/opt/PostgreSQL/9.6/bin:$PATH
      export PGDATA=/opt/PostgreSQL/9.6/data
      export PGDATABASE=postgres
      export PGUSER=postgres
      export PGPORT=5432 
      export PGLOCALEDIR=/opt/PostgreSQL/9.6/share/locale
      export MANPATH=$MANPATH:/opt/PostgreSQL/9.6/share/man
      
      pg_restore  -d mumbai  /backup/mumbai.dump


3. Restoring Pune:

      Rename the Existing database:

      postgres=# ALTER DATABASE  Pune  RENAME TO  Pune_old;
      ALTER DATABASE
      postgres=# create database Pune;
      CREATE DATABASE

Monitor both the master and the slave while the restore runs, to check whether the database is growing:
      MASTER:

      postgres=# SELECT pg_size_pretty(pg_database_size('Pune'));
       pg_size_pretty
      ----------------
       7233 kB
      (1 row)
      
      SLAVE:
      postgres=# SELECT pg_size_pretty(pg_database_size('Pune'));
       pg_size_pretty
      ----------------
       7233 kB
      (1 row)

Before setting the crontab, check the date again:
      bash-4.2$ date
      Sat Oct 21 12:47:23 IST 2017

      AT CRONTAB:

      crontab -e
       50 12 21 * * sh /opt/PostgreSQL/9.6/cron_script/pune_restore.sh
      ----pune_restore.sh----
      export PATH=/opt/PostgreSQL/9.6/bin:$PATH
      export PGDATA=/opt/PostgreSQL/9.6/data
      export PGDATABASE=postgres
      export PGUSER=postgres
      export PGPORT=5432
      export PGLOCALEDIR=/opt/PostgreSQL/9.6/share/locale
      export MANPATH=$MANPATH:/opt/PostgreSQL/9.6/share/man
      
      pg_restore -d Pune   /backup/pune.dump


4. Restoring delhi:

      postgres=# ALTER DATABASE  delhi RENAME TO  delhi_old;
      ALTER DATABASE
      postgres=# create database delhi;
      CREATE DATABASE

      AT CRONTAB:

      crontab -e
       25 23 20 * * sh /opt/PostgreSQL/9.6/cron_script/delhi_restore.sh
      ----delhi_restore.sh----
      export PATH=/opt/PostgreSQL/9.6/bin:$PATH
      export PGDATA=/opt/PostgreSQL/9.6/data
      export PGDATABASE=postgres
      export PGUSER=postgres
      export PGPORT=5432
      export PGLOCALEDIR=/opt/PostgreSQL/9.6/share/locale
      export MANPATH=$MANPATH:/opt/PostgreSQL/9.6/share/man
      
      pg_restore  -d delhi   /backup/delhi.dump

      ========================================================================

NOTE:
• The restored database is smaller than the original.
• By dumping and restoring the database, you recreate it without all of this free space. This is essentially what the VACUUM FULL command does: it rewrites the current data into a new file, then deletes the old file.
• If you inserted 100 rows into a table and then deleted every row with an odd-numbered ID, the DBMS could write out a new table with only 50 rows, but it is more efficient to simply mark the deleted rows as free space and reuse them for the next inserts. The table therefore takes up twice as much space as currently needed.
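To reclaim that space in place instead of doing a dump and restore, VACUUM FULL can be run (note that it takes an exclusive lock on each table while rewriting it):

```sql
-- Rewrite tables to release dead-row space back to the OS,
-- then re-check the database size.
VACUUM FULL;
SELECT pg_size_pretty(pg_database_size('delhi'));
```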
Copy the dump files to the target server:

[root@MHSDC-SCDLR-MONTR backup]# scp delhi.dump root@10.187.200.20:/backups/
root@10.187.200.15's password:
delhi.dump
[root@MHSDC-SCDLR-DB5 backup]# scp mumbai.dump root@10.187.200.20:/backups/
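After the scp, it is worth confirming the dump arrived intact; a sketch comparing checksums (verify_dump is a hypothetical helper; run it on both hosts and compare the hashes):

```shell
#!/bin/sh
# Print the MD5 hash of a dump file; identical output on the source and
# destination hosts means the copy is intact.
verify_dump() {
    md5sum "$1" | awk '{print $1}'
}

# e.g. verify_dump /backup/delhi.dump                        # on the source host
#      ssh root@10.187.200.20 "md5sum /backups/delhi.dump"   # on the destination
```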

Changing permissions:

      [root@DBServers1 backup]# chown postgres:postgres pune.dump
