      PostgreSQL Database
      • BASIC POSTGRESQL
        6
        • Lecture1.1
          PostgreSQL Brief History 30 min
        • Lecture1.2
          PostgreSQL Features 20 min
        • Lecture1.3
          PostgreSQL Monitoring Tools Overview 30 min
        • Lecture1.4
          PostgreSQL – Oracle Vs PostgreSQL 30 min
        • Lecture1.5
          PostgreSQL Data Types 30 min
        • Lecture1.6
          PostgreSQL Maximum Objects Size 30 min
      • VMWARE & POSTGRESQL INSTALLATION
        9
        • Lecture2.1
          Vmware Installation 30 min
        • Lecture2.2
          Creating Linux Virtual Machine 01 hour
        • Lecture2.3
          PostgreSQL Installation Types 30 min
        • Lecture2.4
          PostgreSQL GUI Installation 30 min
        • Lecture2.5
          PostgreSQL Text Mode Installation 30 min
        • Lecture2.6
          PostgreSQL Unattended Mode Installation 30 min
        • Lecture2.7
          Configure the network & Disk Partition 30 min
        • Lecture2.8
          How to install PostgreSQL10 On Linux Server ? 20 min
        • Lecture2.9
          PostgreSQL -11 Installation (rpm & source code) 30 min
      • POSTGRESQL DATABASE
        6
        • Lecture3.1
          Connect Postgres Server 10 min
        • Lecture3.2
          PostgreSQL startup / shutdown /restart the postgresql server 30 min
        • Lecture3.3
          PostgreSQL .bash_profile Set Up 30 min
        • Lecture3.4
          PostgreSQL Database Creation 30 min
        • Lecture3.5
          PostgreSQL Connect Database 30 min
        • Lecture3.6
          PostgreSQL Drop Database 30 min
      • POSTGRESQL TABLE
        16
        • Lecture4.1
          PostgreSQL Languages 30 min
        • Lecture4.2
          PostgreSQL Create Table 30 min
        • Lecture4.3
          PostgreSQL Select Table 30 min
        • Lecture4.4
          PostgreSQL Alter Table 30 min
        • Lecture4.5
          PostgreSQL Drop Table 30 min
        • Lecture4.6
          PostgreSQL Truncate Table 30 min
        • Lecture4.7
          PostgreSQL Rename 30 min
        • Lecture4.8
          PostgreSQL Comment 30 min
        • Lecture4.9
          PostgreSQL Insert 30 min
        • Lecture4.10
          PostgreSQL Update 30 min
        • Lecture4.11
          PostgreSQL Table Delete 30 min
        • Lecture4.12
          PostgreSQL Merge Table 30 min
        • Lecture4.13
          PostgreSQL UNION 30 min
        • Lecture4.14
          PostgreSQL UNION ALL 30 min
        • Lecture4.15
          PostgreSQL INTERSECT 30 min
        • Lecture4.16
          PostgreSQL DISTINCT 30 min
      • USER/OBJECT PRIVILEGES AND ROLE ASSIGNING
        3
        • Lecture5.1
          PostgreSQL Grant 30 min
        • Lecture5.2
          PostgreSQL Revoke 30 min
        • Lecture5.3
          Postgres user creation and restrict DDL statement & other database access 30 min
      • TRANSACTIONS - MVCC
        3
        • Lecture6.1
          PostgreSQL Commit 30 min
        • Lecture6.2
          PostgreSQL Rollback 30 min
        • Lecture6.3
          PostgreSQL Savepoint 30 min
      • POSTGRESQL USER/SCHEMA MANAGEMENT
        9
        • Lecture7.1
          PostgreSQL User Creation 30 min
        • Lecture7.2
          PostgreSQL User Creation Using Utility 30 min
        • Lecture7.3
          PostgreSQL Drop user 30 min
        • Lecture7.4
          PostgreSQL Drop User Using Utility 30 min
        • Lecture7.5
          PostgreSQL Password Changing and Views 30 min
        • Lecture7.6
          PostgreSQL Group/role 30 min
        • Lecture7.7
          Alter PostgreSQL User/Role/group 30 min
        • Lecture7.8
          PostgreSQL Schema 30 min
        • Lecture7.9
          PostgreSQL user creation and restrict DDL & database access 30 min
      • POSTGRESQL CONSTRAINTS
        6
        • Lecture8.1
          PostgreSQL Constraints 30 min
        • Lecture8.2
          PostgreSQL Primary Key 30 min
        • Lecture8.3
          PostgreSQL UNIQUE Constraint 30 min
        • Lecture8.4
          PostgreSQL CHECK Constraint 30 min
        • Lecture8.5
          PostgreSQL NOT NULL Constraint 30 min
        • Lecture8.6
          PostgreSQL Foreign Key 30 min
      • POSTGRESQL ADVANCED DATA TYPES
        5
        • Lecture9.1
          PostgreSQL DOMAIN Data Type 30 min
        • Lecture9.2
          PostgreSQL Alter Domain 30 min
        • Lecture9.3
          PostgreSQL Drop DOMAIN 30 min
        • Lecture9.4
          PostgreSQL Json Data Type 30 min
        • Lecture9.5
          PostgreSQL Sequence 30 min
      • POSTGRESQL VIEWS
        1
        • Lecture10.1
          How to Create PostgreSQL View 30 min
      • POSTGRESQL MONITORING OBJECT USAGE/SIZE
        1
        • Lecture11.1
          How to Get Table Size, Database Size, Indexes Size, schema Size, Tablespace Size, column Size in PostgreSQL Database 30 min
      • POSTGRESQL DATABASE ARCHITECTURE
        4
        • Lecture12.1
          PostgreSQL Tablespace 30 min
        • Lecture12.2
          PostgreSQL Understanding Of Objects 30 min
        • Lecture12.3
          PostgreSQL Architecture 30 min
        • Lecture12.4
          PostgreSQL Query Flow 30 min
      • POSTGRESQL BACKUP AND RECOVERY
        13
        • Lecture13.1
          PostgreSQL Backup & Recovery Overview 30 min
        • Lecture13.2
          PostgreSQL Database Backup Tools – Pg_dump 30 min
        • Lecture13.3
          PostgreSQL Database Backup Tools – Pg_dumpall 30 min
        • Lecture13.4
          PostgreSQL Database Backup Tools – Pg_basebackup 30 min
        • Lecture13.5
          PostgreSQL COPY data From Text File 30 min
        • Lecture13.6
          PostgreSQL COPY data to Text File 30 min
        • Lecture13.7
          PostgreSQL Clone Table 20 min
        • Lecture13.8
          Postgres Database Schema Migration 30 min
        • Lecture13.9
          PostgreSQL Database clone/migration 30 min
        • Lecture13.10
          PostgreSQL Compress backup format 30 min
        • Lecture13.11
          PostgreSQL Archivelog 30 min
        • Lecture13.12
          PostgreSQL Point In Time Recovery 30 min
        • Lecture13.13
          Taking Backup On Postgres Slave (Standby) Server 30 min
      • POSTGRESQL PERFORMANCE TUNING
        5
        • Lecture14.1
          PostgreSQL Index 30 min
        • Lecture14.2
          PostgreSQL Reindex 30 min
        • Lecture14.3
          PostgreSQL Performance Tuning 30 min
        • Lecture14.4
          Understanding Postgres VACUUM | VACUUM FULL | VACUUM ANALYZE 30 min
        • Lecture14.5
          Postgres Autovacuum Configuration/Setup 30 min
      • HIGH AVAILABILITY, LOAD BALANCING, AND REPLICATION
        11
        • Lecture15.1
          POSTGRESQL STREAMING REPLICATION 30 min
        • Lecture15.2
          SSL Streaming replication with PostgreSQL 10 30 min
        • Lecture15.3
          How to make Linux partition and configure the network on Redhat Linux7.3 30 min
        • Lecture15.4
          How To Fix Firewall Issues while connecting the other server? 30 min
        • Lecture15.5
          How to install the PostgreSQL10 On Linux ? 30 min
        • Lecture15.6
          How to Configure the cascade replication On PostgreSQL 10.3 ? 30 min
        • Lecture15.7
          How to add extra one slave an existing PostgreSQL cascade replication without down time ? 30 min
        • Lecture15.8
          PostgreSQL Switchover 30 min
        • Lecture15.9
          Postgres Failover 30 min
        • Lecture15.10
          Postgres Upgrade 30 min
        • Lecture15.11
          PostgreSQL Upgrade 9.5 to 11.3 . 30 min

        PostgreSQL Point In Time Recovery

        • PostgreSQL “Point-in-time Recovery” (PITR), also called incremental backup, online backup, or archive backup, works as follows: the server records every data-modifying transaction (insert, update, delete) in write-ahead log (WAL) files, and recovery uses this WAL history to roll forward all changes made since the last full database backup (see the short sketch below).
        • Instead of taking a full database backup every time, we only back up the WAL (archive log) files generated since the last backup.
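        To see the WAL machinery at work, the current write position and the WAL file it maps to can be queried from psql. A minimal sketch using the 9.x function names (pg_current_xlog_location() and pg_xlogfile_name(); in PostgreSQL 10 and later these were renamed pg_current_wal_lsn() and pg_walfile_name()):

        sssdb=# -- current WAL write location (LSN)
        sssdb=# select pg_current_xlog_location();
        sssdb=# -- name of the WAL segment file that location falls into
        sssdb=# select pg_xlogfile_name(pg_current_xlog_location());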

        Advantages

        • Zero downtime – incremental backup matters for critical systems that cannot afford even a minute of downtime. With Point-in-time Recovery, backup-related downtime is eliminated entirely, because the base backup and WAL archiving run while the database keeps accepting connections.
        • Lower storage usage – with incremental backup we copy only the archive log files produced since the last backup, instead of taking a full database backup every day.

        Point-in-time Recovery (incremental backup) on a PostgreSQL server.
        Backup steps:

        • Modify postgresql.conf to enable WAL archiving
        • Make a base backup (full database backup)
        • Copy the base backup to remote storage
        • Copy the WAL (archive log) files to remote storage (a continuous process; a sketch of these remote copies follows this list)
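        The “remote storage” copies in the last two steps can be as simple as a scheduled rsync job. A minimal sketch, assuming a hypothetical backup host named backuphost and a target path /backup/pgpitr (both placeholders; adjust to your environment):

        # copy the base backup once, after it has been created
        rsync -av /u02/ssslocation/pgpitr/databkp/basebkp.tar.gz backuphost:/backup/pgpitr/databkp/
        # ship newly archived WAL files continuously, e.g. every few minutes from cron
        rsync -av /u02/ssslocation/pgpitr/walbkp/ backuphost:/backup/pgpitr/walbkp/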

        Point-in-time Recovery Steps:

        • Extract files from base backup
        • Copy files from pg_xlog folder
        • Create recovery.conf file
        • Start recovery

        1) Create a test database cluster using initdb, with all database files under /u02/data1/

        -bash-3.2$ pwd
        /opt/PostgresPlus/9.1AS/bin
        -bash-3.2$ ./initdb -D /u02/data1/

        Start the database:
        -bash-3.2$ ./pg_ctl start -D /u02/data1

        2) Modify the PostgreSQL configuration file (postgresql.conf) to tell PostgreSQL how to copy (archive) the WAL files it generates.

        Create the WAL archive directory:
        -bash-3.2$ mkdir -p /u02/ssslocation/pgpitr/walbkp

        Create the directory that will hold the base backup (the command "tar -cvzf /u02/ssslocation/pgpitr/databkp/basebkp.tar.gz /u02/data1/" is run there later):
        -bash-3.2$ mkdir -p /u02/ssslocation/pgpitr/databkp
        
        Modify postgresql.conf
        -bash-3.2$ vi /u02/data1/postgresql.conf 
        archive_mode = on               # allows archiving to be done(change requires restart)
        archive_command = 'cp %p /u02/ssslocation/pgpitr/walbkp/%f' # command to use to archive a logfile segment
        wal_level = hot_standby         # minimal, archive, or hot_standby
        
        Restart the database:
        -bash-3.2$ ./pg_ctl stop -D /u02/data1 -m i
        waiting for server to shut down....LOG:  received immediate shutdown request
        WARNING:  terminating connection because of crash of another server process
        DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
        HINT:  In a moment you should be able to reconnect to the database and repeat your command.
         done
        server stopped
        
         -bash-3.2$ ./pg_ctl start -D /u02/data1 
        server starting
        LOG:  
                ** EnterpriseDB Dynamic Tuning Agent ********************************************
                 *       System Utilization: 66 %                                               
        *        *         Database Version: 9.1.2.2                                             
        *        * Operating System Version:                                                     
        *        *     Number of Processors: 0                                                  
        *        *           Processor Type:                                                     
        *        *   Processor Architecture:                                                     
        *        *            Database Size: 0.1    GB                                           
        *        *                      RAM: 1.0    GB                                           
        *        *            Shared Memory: 1011   MB                                           
        *        *       Max DB Connections: 104                                                 
        *        *               Autovacuum: on                                                  
        *        *       Autovacuum Naptime: 60   Secnds                                        
        *        *            InfiniteCache: off                                                      
        *        *    InfiniteCache Servers: 0                                                   
        *        *       InfiniteCache Size: 0.000  GB                                           
        *       ***********************************************************************
        -bash-3.2$ LOG:  loaded library "$libdir/dbms_pipe"
        LOG:  loaded library "$libdir/edb_gen"
        LOG:  loaded library "$libdir/plugins/plugin_debugger"
        LOG:  loaded library "$libdir/plugins/plugin_spl_debugger"
        LOG:  database system was interrupted; last known up at 2012-11-14 12:10:38 IST
        LOG:  database system was not properly shut down; automatic recovery in progress
        LOG:  consistent recovery state reached at 0/20FA714
        LOG:  record with zero length at 0/20FA714
        LOG:  redo is not required
        LOG:  
                 ** EnterpriseDB Dynamic Tuning Agent ********************************************
                 *       System Utilization: 66 %                                                
        *        *         Database Version: 9.1.2.2                                             
        *        * Operating System Version:                                                     
        *        *     Number of Processors: 0                                                   
        *        *           Processor Type:                                                     
        *        *   Processor Architecture:                                                     
        *        *            Database Size: 0.1    GB                                           
        *        *                      RAM: 1.0    GB                                           
        *        *            Shared Memory: 1011   MB                                           
        *        *       Max DB Connections: 104                                                 
        *        *               Autovacuum: on                                                  
        *        *       Autovacuum Naptime: 60   Secnds                                        
        *        *            InfiniteCache: off                                                 
        *        *    InfiniteCache Servers: 0                                                   
        *        *       InfiniteCache Size: 0.000  GB                                           
        *        ****************************************************************************
        LOG:  autovacuum launcher started
        LOG:  database system is ready to accept connections
        3) First, understand how PostgreSQL handles its log files: pg_xlog and the archived WAL

        • pg_xlog is the directory where PostgreSQL keeps its transaction history (the WAL); in this setup it is located at /u02/data1/pg_xlog.
        • Whenever a user inserts, updates or deletes a record, the transaction history is automatically written or appended to a log file under the pg_xlog directory.
        • The log file names look like this: 000000010000000000000001 -> 000000010000000000000006
        • Each log file holds about 16 MB of data; when that limit is exceeded, PostgreSQL automatically creates a new log file, and the file names count upward in hexadecimal (0-9, then A-F):
        000000010000000000000001
        ..
        000000010000000000000009
        00000001000000000000000A
        ..
        00000001000000000000000F
        000000010000000000000010

        For example,

        [root@asmhost pg_xlog]# ls -lsh
        total 33M
         17M -rw------- 1 enterprisedb enterprisedb  16M Nov 14 12:28 000000010000000000000002
         17M -rw------- 1 enterprisedb enterprisedb  16M Nov  9 14:55 000000010000000000000003
        4.0K drwx------ 2 enterprisedb enterprisedb 4.0K Nov  9 12:20 archive_status

        These are the log files that we are going to roll forward during PostgreSQL Point-in-time Recovery.
        Verify the WAL archiving settings from postgresql.conf:

        -bash-3.2$ ./psql -p 5445 sssdb
        
         sssdb=# show archive_command;
                     archive_command             
        -----------------------------------------
         cp %p /u02/ssslocation/pgpitr/walbkp/%f
        (1 row)
        
        sssdb=# show archive_mode;
          archive_mode 
         --------------
           on
        (1 row)
        • Whenever a WAL file in pg_xlog is completed (each file holds about 16 MB), PostgreSQL runs archive_command on it, which copies the file to the /u02/ssslocation/pgpitr/walbkp/ directory.
        • We have to back up these archived files continuously (that is why it is called incremental backup): we no longer need repeated full database backups, but we do copy the archive log files constantly (a slightly safer archive_command is sketched below).
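        A slightly safer variant of the archive_command used above, following the pattern shown in the PostgreSQL documentation, refuses to overwrite a segment that has already been archived (a sketch; the archive path is the one used in this walkthrough):

        archive_command = 'test ! -f /u02/ssslocation/pgpitr/walbkp/%f && cp %p /u02/ssslocation/pgpitr/walbkp/%f'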

        Important directories:

        /u02/data1/pg_xlog
        /u02/ssslocation/pgpitr/walbkp/
        /u02/ssslocation/pgpitr/databkp

        4) Data simulation & backup process
        Create dummy tables and records – we insert a large number of rows into new tables; roughly 1,000,000 rows are enough to make PostgreSQL fill several WAL files in pg_xlog and trigger the archive process that copies them from /u02/data1/pg_xlog to /u02/ssslocation/pgpitr/walbkp (each log file is about 16 MB).
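        As an aside, if you do not want to wait for bulk inserts to fill a WAL segment, a segment switch can be forced explicitly, which makes the current segment eligible for archiving right away. A minimal sketch using the 9.x function name (renamed pg_switch_wal() in PostgreSQL 10 and later):

        sssdb=# -- close the current WAL segment so archive_command picks it up
        sssdb=# select pg_switch_xlog();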

        sssdb=# create table test_1 as select * from pg_class;
        SELECT 456
         
        sssdb=# select * from current_timestamp;
                current_timestamp         
        ----------------------------------
         14-NOV-12 12:36:23.264212 +05:30
        (1 row)
        
        sssdb=# create table test_2 as select * from pg_description;
        SELECT 3534
        
        sssdb=# create table test_3 as select * from pg_description;
        SELECT 3534
        
        sssdb=# create table test_4(id number);
        CREATE TABLE
         
        sssdb=# select * from current_timestamp;
                current_timestamp         
        ----------------------------------
         14-NOV-12 12:46:00.076725 +05:30
        (1 row)
         
        sssdb=# insert into test_4 values(generate_series(100000,10000000));
        INSERT 0 990000
         
        sssdb=# select * from current_timestamp;
                current_timestamp         
        ----------------------------------
         14-NOV-12 12:51:42.144344 +05:30
        (1 row)
        The archived log files now look like the following:
        [root@asmhost pg_xlog]# cd /u02/ssslocation/pgpitr/walbkp
        [root@asmhost walbkp]# ls
        000000010000000000000002  00000001000000000000000F  00000001000000000000001C
        000000010000000000000003  000000010000000000000010  00000001000000000000001D
        000000010000000000000004  000000010000000000000011  00000001000000000000001E
        000000010000000000000005  000000010000000000000012  00000001000000000000001F
        000000010000000000000006  000000010000000000000013  000000010000000000000020
        000000010000000000000007  000000010000000000000014  000000010000000000000021
        000000010000000000000008  000000010000000000000015  000000010000000000000022
        000000010000000000000009  000000010000000000000016  000000010000000000000023
        00000001000000000000000A  000000010000000000000017  000000010000000000000024
        00000001000000000000000B  000000010000000000000018  000000010000000000000025
        00000001000000000000000C  000000010000000000000019  000000010000000000000026
        00000001000000000000000D  00000001000000000000001A  000000010000000000000027
        00000001000000000000000E  00000001000000000000001B
        
        
        [root@asmhost walbkp]# cd /u02/data1/pg_xlog
        [root@asmhost pg_xlog]# pwd
        /u02/data1/pg_xlog
        
        [root@asmhost pg_xlog]# ls
        000000010000000000000002  000000010000000000000010  00000001000000000000001E
        000000010000000000000003  000000010000000000000011  00000001000000000000001F
        000000010000000000000004  000000010000000000000012  000000010000000000000020
        000000010000000000000005  000000010000000000000013  000000010000000000000021
        000000010000000000000006  000000010000000000000014  000000010000000000000022
        000000010000000000000007  000000010000000000000015  000000010000000000000023
        000000010000000000000008  000000010000000000000016  000000010000000000000024
        000000010000000000000009  000000010000000000000017  000000010000000000000025
        00000001000000000000000A  000000010000000000000018  000000010000000000000026
        00000001000000000000000B  000000010000000000000019  000000010000000000000027
        00000001000000000000000C  00000001000000000000001A  000000010000000000000028
        00000001000000000000000D  00000001000000000000001B  archive_status
        00000001000000000000000E  00000001000000000000001C
        00000001000000000000000F  00000001000000000000001D
        

        5) Create a full database backup – the base backup

        sssdb=# select pg_start_backup('Full backup');
         pg_start_backup 
        -----------------
         0/29000020
        (1 row)

        pg_start_backup() writes a backup label and records the starting WAL location, so PostgreSQL knows where replay must begin for this backup.
        Use tar to compress the whole /u02/data1 data directory into the database base backup.

        [root@asmhost databkp]# pwd
        /u02/ssslocation/pgpitr/databkp
        [root@asmhost databkp]# tar -cvzf basebkp.tar.gz /u02/data1/
        [root@asmhost databkp]# ls
        basebkp.tar.gz
        • basebkp.tar.gz is the full database backup (base backup), including the PostgreSQL configuration, system catalogs and all other files and directories.
        • pg_stop_backup() marks the end of the backup in the WAL and waits until all WAL segments required by the backup have been archived.
        sssdb=# select pg_stop_backup();
        NOTICE:  pg_stop_backup complete, all required WAL segments have been archived
         pg_stop_backup 
        ----------------
         0/29000144
        (1 row)
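        As an alternative to the manual pg_start_backup / tar / pg_stop_backup sequence, the pg_basebackup utility (available from PostgreSQL 9.1 onward) can take the base backup in one step. A minimal sketch, assuming max_wal_senders is set to at least 1 and a replication entry exists in pg_hba.conf, and using the same port as above:

        -bash-3.2$ ./pg_basebackup -h localhost -p 5445 -D /u02/ssslocation/pgpitr/databkp/base -Ft -P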

        6) Prepare for Point-in-time Recovery
        The pg_start_backup() / pg_stop_backup() backup label is written to the file 000000010000000000000029.00000020.backup, which records the WAL range the base backup depends on.

        [root@asmhost pg_xlog]# pwd
        /u02/data1/pg_xlog
        
        [root@asmhost pg_xlog]# cat 000000010000000000000029.00000020.backup
        START WAL LOCATION: 0/29000020 (file 000000010000000000000029)
        STOP WAL LOCATION: 0/29000144 (file 000000010000000000000029)
        CHECKPOINT LOCATION: 0/29000050
        BACKUP METHOD: pg_start_backup
        START TIME: 2012-11-14 12:57:05 IST
        LABEL: Full backup
        STOP TIME: 2012-11-14 13:04:17 IST
        [root@asmhost pg_xlog]# ls -l|wc -l
        43
        
        [root@asmhost pg_xlog]# cd /u02/ssslocation/pgpitr/walbkp/
        [root@asmhost walbkp]# echo *.backup
        000000010000000000000029.00000020.backup
        
        [root@asmhost walbkp]# cat 000000010000000000000029.00000020.backup
        START WAL LOCATION: 0/29000020 (file 000000010000000000000029)
        STOP WAL LOCATION: 0/29000144 (file 000000010000000000000029)
        CHECKPOINT LOCATION: 0/29000050
        BACKUP METHOD: pg_start_backup
        START TIME: 2012-11-14 12:57:05 IST
        LABEL: Full backup
        STOP TIME: 2012-11-14 13:04:17 IST
        
        [root@asmhost walbkp]# ls -l|wc -l
        42

        7) Create tables test_5, test_6 and test_7 and note their creation times – these timestamps will be used as recovery targets

        sssdb=# create table test_5(id number(8));
        CREATE TABLE
        sssdb=# insert into test_5 values(generate_series(1,1000000));
        INSERT 0 1000000
        sssdb=# insert into test_5 values(generate_series(1,1000000));
        INSERT 0 1000000
         
        sssdb=# select * from current_timestamp;
                current_timestamp         
        ----------------------------------
         14-NOV-12 13:13:54.047576 +05:30
        (1 row)
        sssdb=# create table test_6(id number(8));
        CREATE TABLE
        
        sssdb=# select * from current_timestamp;
                current_timestamp         
        ----------------------------------
         14-NOV-12 13:14:34.626699 +05:30
        (1 row)
        
        sssdb=# create table test_7(id number(8));
        CREATE TABLE
        sssdb=# insert into test_6 values(generate_series(1,1000000));
        INSERT 0 1000000
        sssdb=# insert into test_7 values(generate_series(1,10000000));
        INSERT 0 10000000
        [root@asmhost walbkp]# pwd
        /u02/ssslocation/pgpitr/walbkp
        [root@asmhost walbkp]# ls -l|wc -l
        91
        
        [root@asmhost walbkp]# cd /u02/data1/pg_xlog
        [root@asmhost pg_xlog]# ls -l|wc -l
        53

        Before moving on, please study how the transaction log files generated by PostgreSQL move between the two directories above. Make sure you understand when PostgreSQL creates a new log file and when a file is copied to the archive directory, and keep the log file naming format in mind; take some time to review the log file sequence shown above.
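        One convenient way to watch this movement is the archive_status subdirectory of pg_xlog: PostgreSQL creates a <segment>.ready marker there when a segment is waiting to be archived and renames it to <segment>.done once archive_command has succeeded. A quick sketch:

        [root@asmhost pg_xlog]# ls /u02/data1/pg_xlog/archive_status/
        # files ending in .ready  -> segments still waiting to be archived
        # files ending in .done   -> segments already copied by archive_command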

        sssdb=# select table_name, status from user_tables;
         table_name | status 
        ------------+--------
         TEST_1     | VALID # created time   14-NOV-12 12:36:23
         TEST_2     | VALID # created time   14-NOV-12 12:46:00
         TEST_3     | VALID # created time
         TEST_4     | VALID # created time   14-NOV-12 12:51:42
         TEST_5     | VALID     # created time   14-NOV-12 13:13:54
         TEST_6     | VALID # created time   14-NOV-12 13:14:34
         TEST_7     | VALID # created time   14-NOV-12 14:07:40
        (7 rows)

        8) Disaster strikes
        To simulate a disaster, we make the PostgreSQL server go down by killing its process.

        sssdb=# select * from current_timestamp;
                current_timestamp        
        ---------------------------------
         14-NOV-12 14:07:40.00999 +05:30
        (1 row)
        
        --Kill the postgresql process
        [root@asmhost ~]# ps -ef|grep data1
        506       6536     1  0 12:28 pts/1    00:00:00 /opt/PostgresPlus/9.1AS/bin/edb-postgres -D /u02/data1
        root      9542  6101  0 14:21 pts/3    00:00:00 grep data1
        
        [root@asmhost ~]# kill -9 $(head -1 /u02/data1/postmaster.pid)
        
        [root@asmhost ~]# ps -ef|grep data1
        root      9680  6101  0 14:24 pts/3    00:00:00 grep data1

        9) Recovery process
        Finally we reach the recovery process. Remember one file and two directories (a quick sanity check follows the list):

        • Base backup file – /u02/ssslocation/pgpitr/databkp/basebkp.tar.gz
        • Log files not yet archived – /u02/data1/pg_xlog (all files under the pg_xlog directory)
        • Archived WALs – /u02/ssslocation/pgpitr/walbkp (all archived files under this directory; in a real environment this may be remote storage)
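        Before touching anything, it is worth a quick sanity check that both backup pieces are actually usable. A minimal sketch:

        # the base backup tarball is readable
        [root@asmhost ~]# tar -tzf /u02/ssslocation/pgpitr/databkp/basebkp.tar.gz | head
        # how many archived WAL files are available for replay
        [root@asmhost ~]# ls /u02/ssslocation/pgpitr/walbkp | wc -l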

        Step 1. Rename data1 to olddata1.bad.data. We assume the database files in the data1 directory were damaged by the disaster we just simulated, so we will restore into a fresh data directory.

        [root@asmhost u02]# mv data1 olddata1.bad.data
        [root@asmhost u02]# ls
        admin  data            lost+found  spl_bkp  ssslocation  sssw.csv  tbs1
        app    olddata1.bad.data  oradata  sss.csv  sssnew.csv   tbs
        
        [root@asmhost u02]# pwd
        /u02
        [root@asmhost u02]# mkdir data1
        [root@asmhost u02]# ls -l|grep data1
        drwxr-xr-x  2 root         root          4096 Nov 14 14:38 data1
        drwx------ 14 enterprisedb enterprisedb  4096 Nov 14 13:04 olddata1.bad.data
        
        --change the owner permission
        [root@asmhost u02]# chown enterprisedb:enterprisedb data1/
        
        [root@asmhost u02]# ls -l|grep data1
        drwxr-xr-x  2 enterprisedb enterprisedb  4096 Nov 14 14:38 data1
        drwx------ 14 enterprisedb enterprisedb  4096 Nov 14 13:04 olddata1.bad.data

        Step 2. Extract the base backup basebkp.tar.gz; the extraction recreates the data1 tree just as it was before. Move all extracted files into /u02/data1.

        [root@asmhost pgpitr]# pwd
        /u02/ssslocation/pgpitr
        
        [root@asmhost pgpitr]# cd databkp/
        [root@asmhost databkp]# ls
        basebkp.tar.gz
        
        [root@asmhost databkp]# tar -xvzf basebkp.tar.gz
        [root@asmhost databkp]# mv u02/data1/* /u02/data1/

        Step 3. Start the database

        -bash-3.2$ ./pg_ctl start -D /u02/data1
        pg_ctl: another server might be running; trying to start server anyway
        server starting
        -bash-3.2$ LOG:  
                 ** EnterpriseDB Dynamic Tuning Agent ********************************************
                 *       System Utilization: 66 %                                                
        *        *         Database Version: 9.1.2.2                                             
        *        * Operating System Version:                                                     
        *        *     Number of Processors: 0                                                   
        *        *           Processor Type:                                                     
        *        *   Processor Architecture:                                                     
        *        *            Database Size: 0.1    GB                                           
        *        *                      RAM: 1.0    GB                                           
        *        *            Shared Memory: 1011   MB                                           
        *        *       Max DB Connections: 104                                                 
        *        *               Autovacuum: on                                                  
        *        *       Autovacuum Naptime: 60   Seconds                                        
        *        *            InfiniteCache: off                                                 
        *        *    InfiniteCache Servers: 0                                                   
        *        *       InfiniteCache Size: 0.000  GB                                           
        *        *****************************************************************************
        LOG:  loaded library "$libdir/dbms_pipe"
        LOG:  loaded library "$libdir/edb_gen"
        LOG:  loaded library "$libdir/plugins/plugin_debugger"
        LOG:  loaded library "$libdir/plugins/plugin_spl_debugger"
        LOG:  ** EnterpriseDB Dynamic Tuning Agent ********************************************
        
                 *       System Utilization: 66 %                                                
        *        *         Database Version: 9.1.2.2                                             
        *        * Operating System Version:                                                     
        *        *     Number of Processors: 0                                                  
        *        *           Processor Type:                                                     
        *        *   Processor Architecture:                                                     
        *        *            Database Size: 0.5    GB                                           
        *        *                      RAM: 1.0    GB                                           
        *        *            Shared Memory: 1011   MB                                           
        *        *       Max DB Connections: 104                                                 
        *        *               Autovacuum: on                                                  
        *        *       Autovacuum Naptime: 60   Seconds                                        
        *        *            InfiniteCache: off                                                 
        *        *    InfiniteCache Servers: 0                                                   
        *        *       InfiniteCache Size: 0.000  GB                                           
        *        *****************************************************************************
        LOG:  database system was interrupted; last known up at 2012-11-14 12:57:05 IST
        LOG:  database system was not properly shut down; automatic recovery in progress
        LOG:  consistent recovery state reached at 0/29000050
        LOG:  redo starts at 0/29000020
        LOG:  record with zero length at 0/290000A0
        LOG:  redo done at 0/29000050
        LOG:  autovacuum launcher started
        LOG:  database system is ready to accept connections
        
        -bash-3.2$ pwd
        /opt/PostgresPlus/9.1AS/bin
        -bash-3.2$ ./psql -p 5445 sssdb
        psql (9.1.2.2)
        Type "help" for help.
        sssdb=# select * from user_tables;
            owner     | schemaname | table_name | table_space | status
        --------------+------------+------------+-------------+--------
         ENTERPRISEDB | PUBLIC     | TEST_1     |             | VALID
         ENTERPRISEDB | PUBLIC     | TEST_2     |             | VALID
         ENTERPRISEDB | PUBLIC     | TEST_3     |             | VALID
         ENTERPRISEDB | PUBLIC     | TEST_4     |             | VALID
        (4 rows)

        Only tables up to test_4 are restored at this point. These tables were created before the base backup was taken, so this is correct.
        Step 4. Copy log files from the pg_xlog folder. Some log files were still sitting in the olddata1.bad.data pg_xlog directory (not yet archived) when the disaster happened; we need to copy those log files back so that as much as possible can be recovered.

        [root@asmhost pg_xlog]# pwd
        /u02/olddata1.bad.data/pg_xlog
        [root@asmhost pg_xlog]# cp -R 000* /u02/data1/pg_xlog/

        Step 5. Create a recovery.conf file and put it under /u02/data1/

        vi recovery.conf
        restore_command = 'cp /u02/ssslocation/pgpitr/walbkp/%f %p'
        recovery_target_time = '14-NOV-12 13:14:34'

        This is the final and most critical step of the recovery process.

        • /u02/ssslocation/pgpitr/walbkp/ is the directory where we backed up our archive log files; restore_command copies the requested WAL file from there.
        • recovery_target_time is the point in time we want to recover to. If this setting is omitted, PostgreSQL recovers as far as it can, possibly replaying all archived changes. Other recovery targets are also possible, as sketched below.
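        Besides a timestamp, recovery.conf (PostgreSQL 9.1 and later) also accepts other recovery targets. A sketch of the alternatives, where the transaction id and the restore-point name are purely illustrative:

        restore_command = 'cp /u02/ssslocation/pgpitr/walbkp/%f %p'
        # stop just before a specific transaction id instead of a timestamp
        # recovery_target_xid = '2237'
        # or stop at a named restore point created beforehand with:
        #   select pg_create_restore_point('before_batch_load');
        # recovery_target_name = 'before_batch_load'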
        -bash-3.2$ ./pg_ctl stop -D /u02/data1 -m i
        LOG:  received immediate shutdown request
        WARNING:  terminating connection because of crash of another server process
        DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
        HINT:  In a moment you should be able to reconnect to the database and repeat your command.
        waiting for server to shut down... done
        server stopped
        -bash-3.2$ ./pg_ctl start -D /u02/data1 
        server starting
        -bash-3.2$ LOG:  
         
                ** EnterpriseDB Dynamic Tuning Agent ********************************************
                 *       System Utilization: 66 %                                                
        *        *         Database Version: 9.1.2.2                                            
        *        * Operating System Version:                                                     
        *        *     Number of Processors: 0                                                   
        *        *           Processor Type:                                                     
        *        *   Processor Architecture:                                                     
        *        *            Database Size: 0.1    GB                                           
        *        *                      RAM: 1.0    GB                                           
        *        *            Shared Memory: 1011   MB                                           
        *        *       Max DB Connections: 104                                                 
        *        *               Autovacuum: on                                                  
        *        *       Autovacuum Naptime: 60   Seconds                                        
        *        *            InfiniteCache: off                                                 
        *        *    InfiniteCache Servers: 0                                                   
        *        *       InfiniteCache Size: 0.000  GB                                           
        *        ********************************************************************************
        LOG:  loaded library "$libdir/dbms_pipe"
        LOG:  loaded library "$libdir/edb_gen"
        LOG:  loaded library "$libdir/plugins/plugin_debugger"
        LOG:  loaded library "$libdir/plugins/plugin_spl_debugger"
        LOG:  database system was interrupted; last known up at 2012-11-14 14:52:03 IST
        LOG:          ** EnterpriseDB Dynamic Tuning Agent ********************************************
                 *       System Utilization: 66 %                                                
        *        *         Database Version: 9.1.2.2                                             
        *        * Operating System Version:                                                     
        *        *     Number of Processors: 0                                                   
        *        *           Processor Type:                                                     
        *        *   Processor Architecture:                                                     
        *        *            Database Size: 0.5    GB                                           
        *        *                      RAM: 1.0    GB                                           
        *        *            Shared Memory: 1011   MB                                           
        *        *       Max DB Connections: 104                                                 
        *        *               Autovacuum: on                                                  
        *        *       Autovacuum Naptime: 60   Secnds                                        
        *        *            InfiniteCache: off                                                 
        *        *    InfiniteCache Servers: 0                                                   
        *        *       InfiniteCache Size: 0.000  GB                                           
        *        ********************************************************************************
        LOG:  starting point-in-time recovery to 2012-11-14 13:14:34+05:30
        LOG:  restored log file "000000010000000000000029" from archive
        LOG:  invalid resource manager ID in primary checkpoint record
        LOG:  using previous checkpoint record at 0/29000050
        LOG:  consistent recovery state reached at 0/290000A0
        LOG:  redo starts at 0/29000020
        LOG:  restored log file "00000001000000000000002A" from archive
        LOG:  restored log file "00000001000000000000002B" from archive
        LOG:  restored log file "00000001000000000000002C" from archive
        LOG:  restored log file "00000001000000000000002D" from archive
        LOG:  restored log file "00000001000000000000002E" from archive
        LOG:  restored log file "00000001000000000000002F" from archive
        LOG:  restored log file "000000010000000000000030" from archive
        LOG:  restored log file "000000010000000000000031" from archive
        LOG:  recovery stopping before commit of transaction 2237, time 2012-11-14 13:14:38.881279+05:30
        LOG:  redo done at 0/31A76348
        LOG:  last completed transaction was at log time 2012-11-14 13:14:17.399818+05:30
        cp: cannot stat `/u02/ssslocation/pgpitr/walbkp/00000002.history': No such file or directory
        LOG:  selected new timeline ID: 2
        cp: cannot stat `/u02/ssslocation/pgpitr/walbkp/00000001.history': No such file or directory
        LOG:  archive recovery complete
        LOG:  database system is ready to accept connections
        LOG:  autovacuum launcher started

        The recovery.conf file above makes PostgreSQL fetch the archived log files from the /u02/ssslocation/pgpitr/walbkp/ directory and replay the data changes up to 14-NOV-12 13:14:34 (the time table test_6 was created).

        -bash-3.2$ ./psql -p 5445 sssdb
        psql (9.1.2.2)
        Type "help" for help.
        
        sssdb=# select * from user_tables;
            owner     | schemaname | table_name | table_space | status 
        --------------+------------+------------+-------------+--------
         ENTERPRISEDB | PUBLIC     | TEST_1     |             | VALID
         ENTERPRISEDB | PUBLIC     | TEST_2     |             | VALID
         ENTERPRISEDB | PUBLIC     | TEST_3     |             | VALID
         ENTERPRISEDB | PUBLIC     | TEST_4     |             | VALID
         ENTERPRISEDB | PUBLIC     | TEST_5     |             | VALID
         ENTERPRISEDB | PUBLIC     | TEST_6     |             | VALID                             
        (6 rows)
        • Tables test_5 and test_6 are restored; test_7, which was created after the recovery target time, is not.
        • P.S. After the recovery finishes, PostgreSQL renames recovery.conf to recovery.done so that recovery is not started again on the next restart.
        • We can read the server log (pg_log) to see how PostgreSQL carried out the recovery.
        • THIS IS A ONE-TIME PROCESS: once recovery has started and finished, we cannot change the recovery target any more (for example, roll forward to another time) on this cluster.

        If we want to roll forward to a different restore time, we need to start the whole recovery process again: extract the files from the base backup and copy the log files once more. This is because, after PostgreSQL has recovered the data, it switches to a new timeline and the WAL file names change accordingly, as shown below.

        [root@asmhost data1]# cat recovery.done 
        restore_command = 'cp /u02/ssslocation/pgpitr/walbkp/%f %p'
        recovery_target_time = '14-NOV-12 13:14:34'
        [root@asmhost data1]#
        • After recovery, the timeline ID at the start of the WAL file names is incremented:
        • 00000001 –> 00000002 –> 00000003
        • If we later want to recover again to the point where table test_6 was created (14-NOV-12 13:14:34), we cannot do it on this cluster; PostgreSQL will report an error in the log unless we repeat the whole recovery process from the base backup.
        • This mechanism of backing up and restoring from archived transaction logs is implemented in many enterprise databases, such as Oracle.
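        To confirm which timeline the recovered cluster is now on, pg_controldata can be inspected; in this walkthrough the server log already reported "selected new timeline ID: 2". A quick sketch:

        -bash-3.2$ ./pg_controldata /u02/data1 | grep TimeLine
        Latest checkpoint's TimeLineID:       2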