
        Greenplum explain plans

        View explain plans

        An explain plan explains the method the optimizer has chosen to produce a result set. Depending on the query, there can be a variety of methods to produce a result set. The optimizer calculates the cost for each method and chooses the one with the lowest cost. In large queries, cost is generally measured by the amount of I/O to be performed.

        An explain plan does not do any actual query processing work. Explain plans use statistics generated by the ANALYZE command, so plans generated before and after running ANALYZE can be quite different. This is especially true for queries with multiple joins, because the order of the joins can have a tremendous impact on performance.
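        Statistics are refreshed with the ANALYZE command; a bare ANALYZE processes every table in the current database, while naming a table (here, the sample table created in the next exercise) limits the work to that table:
          tutorial=# ANALYZE sample;
          ANALYZE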

        In the following exercise, you will generate some small tables that you can query and view some explain plans.

        1. Enable timing so that you can see the effects of different performance tuning measures.
          tutorial=# \timing on
        2. View the create_sample_table.sql script, and then run it.
          tutorial=# \i create_sample_table.sql
          
          DROP TABLE
          Time: 15.901 ms
          SET
          Time: 3.174 ms
          psql:create_sample_table.sql:3: NOTICE:  CREATE TABLE will create implicit
          sequence "sample_id_seq" for serial column "sample.id"
          CREATE TABLE
          Time: 24.421 ms
          INSERT 0 1000000
          Time: 14624.516 ms
          UPDATE 1000000
          Time: 1241.156 ms
          UPDATE 50000
          Time: 190.210 ms
          UPDATE 1000000
          Time: 1111.454 ms
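          The create_sample_table.sql script itself is not reproduced on this page, but the output above gives its shape: drop and recreate a sample table with a serial id column, insert one million rows, then run three updates (all rows, 50,000 rows, all rows). A sketch along those lines (the column names big, wee, and stuff are illustrative guesses, not taken from this page):

          DROP TABLE IF EXISTS sample;
          SET search_path TO public;  -- the output shows a SET; the exact parameter is a guess
          CREATE TABLE sample (id serial, big int, wee int, stuff text)
            DISTRIBUTED BY (id);
          INSERT INTO sample (big) VALUES (generate_series(1, 1000000));
          UPDATE sample SET wee = 0;
          UPDATE sample SET wee = 1
            WHERE id IN (SELECT id FROM sample ORDER BY random() LIMIT 50000);
          UPDATE sample SET stuff = 'some padding text';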
        3. Request the explain plan for the COUNT() aggregate.
          tutorial=# EXPLAIN SELECT COUNT(*) FROM sample WHERE id > 100;
                                         QUERY PLAN
          ---------------------------------------------------------------------------
          Aggregate  (cost=0.00..462.77 rows=1 width=8)
            ->  Gather Motion 2:1  (slice1; segments: 2)  (cost=0.00..462.77 rows=1 width=8)
                  ->  Aggregate  (cost=0.00..462.76 rows=1 width=8)
                        ->  Table Scan on sample  (cost=0.00..462.76 rows=500687 width=1)
                              Filter: id > 100
          Settings:  optimizer=on
          Optimizer status: PQO version 1.597
          (7 rows)

          Query plans are read from bottom to top. In this example, there are four steps. First there is a sequential scan on each segment server to access the rows. Then there is an aggregation on each segment server to produce a count of the number of rows from that segment. Then there is a gathering of the count value to a single location. Finally, the counts from each segment are aggregated to produce the final result.

          The cost number on each step has a start and stop value. For the sequential scan, this begins at time zero and goes until 462.76. This is a fictional number created by the optimizer—it is not a number of seconds or I/O operations.

          The cost numbers are cumulative, so the cost for the second operation includes the cost for the first operation. Notice that nearly all the time to process this query is in the sequential scan.

        4. The EXPLAIN ANALYZE command actually runs the query (without returning the result set), so the output reflects actual row counts and timings. It also produces some memory and I/O statistics.
          tutorial=# EXPLAIN ANALYZE SELECT COUNT(*) FROM sample WHERE id > 100;
          
                                       QUERY PLAN
          -----------------------------------------------------------------------------
           Aggregate  (cost=0.00..462.77 rows=1 width=8)
             Rows out:  1 rows with 446 ms to end, start offset by 7.846 ms.
             ->  Gather Motion 2:1  (slice1; segments: 2)  (cost=0.00..462.77 rows=1 width=8)
                   Rows out:  2 rows at destination with 443 ms to first row, 446 ms to end, start offset by 7.860 ms.
                   ->  Aggregate  (cost=0.00..462.76 rows=1 width=8)
                         Rows out:  Avg 1.0 rows x 2 workers.  Max 1 rows (seg0) with 442 ms to end, start offset by 9.000 ms.
                         ->  Table Scan on sample  (cost=0.00..462.76 rows=500687 width=1)
                               Filter: id > 100
                               Rows out: Avg 499950.0 rows x 2 workers. Max 499951 rows (seg0) with 88 ms to first row, 169 ms to end, start offset by 9.007 ms.
           Slice statistics:
             (slice0)    Executor memory: 159K bytes.
             (slice1)    Executor memory: 177K bytes avg x 2 workers, 177K bytes max (seg0).
           Statement statistics:
             Memory used: 128000K bytes
           Settings:  optimizer=on
           Optimizer status: PQO version 1.597
           Total runtime: 453.855 ms
          (17 rows)

        Changing optimizers

        By default, the sandbox instance disables the Pivotal Query Optimizer and you may see “legacy query optimizer” listed in the EXPLAIN output under “Optimizer status.”
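        The optimizer parameter is an ordinary session setting, so you can also switch optimizers for a single psql session without changing the server configuration:
          tutorial=# SET optimizer = off;
          SET
        With the setting off, plans come from the legacy query optimizer and the "Optimizer status" line of EXPLAIN output changes accordingly; SET optimizer = on restores the Pivotal Query Optimizer for the session.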

        1. Check whether the Pivotal Query Optimizer is enabled.
          $ gpconfig -s optimizer
          
          Values on all segments are consistent
          GUC          : optimizer
          Master  value: off
          Segment value: off
        2. Enable the Pivotal Query Optimizer.
          $ gpconfig -c optimizer -v on --masteronly
           20151201:09:08:31:172854 gpconfig:gpdb-sandbox:gpadmin-[INFO]:-completed successfully
        3. Reload the configuration on master and segment instances.
          $ gpstop -u
          
          20151201:09:08:49:172949 gpstop:gpdb-sandbox:gpadmin-[INFO]:-Starting gpstop with args: -u
          20151201:09:08:49:172949 gpstop:gpdb-sandbox:gpadmin-[INFO]:-Gathering information and validating the environment...
          20151201:09:08:49:172949 gpstop:gpdb-sandbox:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
          20151201:09:08:49:172949 gpstop:gpdb-sandbox:gpadmin-[INFO]:-Obtaining Segment details from master...
          20151201:09:08:49:172949 gpstop:gpdb-sandbox:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.3.12.0'
          20151201:09:08:49:172949 gpstop:gpdb-sandbox:gpadmin-[INFO]:-Signalling all postmaster processes to reload
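        After the reload, you can confirm the master's setting from a psql session (SHOW works in Greenplum as it does in PostgreSQL):
          tutorial=# SHOW optimizer;
        Subsequent EXPLAIN output should report a PQO version under "Optimizer status" instead of the legacy query optimizer.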

