Exploiting SQL/JSON Enhancements for Modern Workloads in PostgreSQL 16

The latest iteration of PostgreSQL, version 16, brings a suite of enhancements that bolster its capabilities with JSON data. These improvements not only align PostgreSQL more closely with the SQL/JSON standard but also offer significant optimizations that streamline and enhance working with JSON data. This evolution is a testament to PostgreSQL’s commitment to meeting the demands of modern, data-driven applications that leverage semi-structured data for a myriad of use cases.

In this post, we’ll delve into the SQL/JSON enhancements introduced in PostgreSQL 16, illustrate their application with code examples, and explore real-world scenarios where these enhancements can be particularly beneficial.

To fully leverage the SQL code examples mentioned in the blog post, let’s create a sample table schema (DDL) and insert some sample data for two tables, products and company_data. This setup will allow you to experiment with the new SQL/JSON enhancements in PostgreSQL 16 firsthand.

Table products

This table will store product details, including a unique identifier, the product name, attributes in JSON format, and additional product information in JSON format.

DDL for products:

CREATE TABLE products (
product_id SERIAL PRIMARY KEY,
product_name TEXT NOT NULL,
product_attributes JSONB,
product_info JSONB
);

Sample Data for products:

INSERT INTO products (product_name, product_attributes, product_info) VALUES
('Laptop', '{"color": "#C0FF33", "memory": "16GB"}', '{"specifications": {"color": "#FF5733", "processor": "Intel i7", "storage": "512GB SSD"}}'),
('Smartphone', '{"color": "#FF5733", "memory": "8GB"}', '{"specifications": {"color": "#FF5733", "processor": "Snapdragon 888", "storage": "256GB UFS 3.1"}}');

Table company_data

This table will simulate employee data within a company, including an employee ID, their name, and their department.

DDL for company_data:

CREATE TABLE company_data (
employee_id SERIAL PRIMARY KEY,
employee_name TEXT NOT NULL,
department TEXT NOT NULL
);

Sample Data for company_data:

INSERT INTO company_data (employee_name, department) VALUES
('Alice Johnson', 'Engineering'),
('Bob Smith', 'Engineering'),
('Catherine Lee', 'Product'),
('David Brown', 'Sales');

Key PostgreSQL 16 SQL/JSON Improvements

Enhanced Numeric Literals in SQL/JSON Paths

PostgreSQL 16 introduces more readable path expressions by supporting hexadecimal, octal, and binary integers, along with the use of underscores in large numbers to improve readability. This feature is a boon for developers working with complex JSON data structures, allowing for clearer and more intuitive queries.

Example:

-- Filtering on a nested JSON value (path-literal examples follow below)
SELECT product_info -> 'specifications' -> 'color' as color_code
FROM products
WHERE product_info -> 'specifications' ->> 'color' = '#FF5733';
color_code
------------
"#FF5733"
"#FF5733"
(2 rows)
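
The query above simply filters on a stored color string; the new literal forms appear inside jsonpath expressions themselves. A minimal sketch, with invented sample documents:

-- Hexadecimal integer literal in a jsonpath filter
SELECT jsonb_path_query('{"byte": 255}', '$.byte ? (@ == 0xFF)');

-- Underscore separators improve readability of large numbers
SELECT jsonb_path_query('{"price": 1000000}', '$.price ? (@ >= 1_000_000)');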

New SQL/JSON Constructors

Creating JSON objects and arrays directly within SQL queries is now more straightforward with the introduction of JSON_OBJECT, JSON_ARRAY, JSON_OBJECTAGG, and JSON_ARRAYAGG. These constructors simplify the process of generating JSON data from relational data, enabling more efficient data transformation and aggregation.

Example:

SELECT department, JSON_OBJECT(employee_id::TEXT : employee_name) AS employees
FROM company_data;
department | employees
-------------+-------------------------
Engineering | {"1" : "Alice Johnson"}
Engineering | {"2" : "Bob Smith"}
Product | {"3" : "Catherine Lee"}
Sales | {"4" : "David Brown"}
(4 rows)

SELECT department, JSON_OBJECTAGG(employee_id::TEXT : employee_name) AS employees
FROM company_data
GROUP BY department;
department | employees
-------------+----------------------------------------------
Product | { "3" : "Catherine Lee" }
Engineering | { "1" : "Alice Johnson", "2" : "Bob Smith" }
Sales | { "4" : "David Brown" }
(3 rows)

SELECT department, JSON_ARRAY(employee_name) AS employees
FROM company_data;
department | employees
-------------+-------------------
Engineering | ["Alice Johnson"]
Engineering | ["Bob Smith"]
Product | ["Catherine Lee"]
Sales | ["David Brown"]
(4 rows)

SELECT department, JSON_ARRAYAGG(employee_name) AS employees
FROM company_data GROUP BY department;
department | employees
-------------+--------------------------------
Product | ["Catherine Lee"]
Engineering | ["Alice Johnson", "Bob Smith"]
Sales | ["David Brown"]
(3 rows)

SQL/JSON Object Checks

The addition of IS JSON predicates improves data integrity and validation by allowing queries to check if a column’s data adheres to expected JSON structures. This feature is particularly useful for ensuring data quality and consistency.

Example:

SELECT * 
FROM products
WHERE product_info IS JSON OBJECT;

product_id | product_name | product_attributes | product_info
------------+--------------+----------------------------------------+-----------------------------------------------------------------------------------------------------
1 | Laptop | {"color": "#C0FF33", "memory": "16GB"} | {"specifications": {"color": "#FF5733", "storage": "512GB SSD", "processor": "Intel i7"}}
2 | Smartphone | {"color": "#FF5733", "memory": "8GB"} | {"specifications": {"color": "#FF5733", "storage": "256GB UFS 3.1", "processor": "Snapdragon 888"}}
(2 rows)

SELECT employees IS JSON ARRAY AS is_json_array
FROM (SELECT department, JSON_ARRAYAGG(employee_name) AS employees
      FROM company_data GROUP BY department);
 is_json_array
---------------
 t
 t
 t
(3 rows)
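
The predicate also supports a key-uniqueness variant, handy when duplicate keys would indicate malformed input:

SELECT '{"a": 1, "b": 2}' IS JSON OBJECT WITH UNIQUE KEYS;  -- returns t
SELECT '{"a": 1, "a": 2}' IS JSON OBJECT WITH UNIQUE KEYS;  -- returns f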

Vectorized JSON Parsing

Vector operations for JSON parsing in PostgreSQL 16 lead to significant performance gains, especially when processing large JSON documents. This enhancement underscores PostgreSQL’s capability to efficiently handle modern workloads that involve heavy JSON data manipulation.

The snapshot below demonstrates the performance difference between versions 15 and 16: the same query drops from roughly 1,003 ms to 437 ms, making PostgreSQL 16 more than twice as fast on this test.

PostgreSQL 15:
--------------
[postgres@ip-20-0-10-44 data]$ /usr/pgsql-15/bin/psql -p 5434
psql (15.6)
Type "help" for help.

postgres=# CREATE TABLE long_json_as_text AS
postgres-# with long as (
postgres(# select repeat(description, 10) from pg_description pd
postgres(# )
postgres-# SELECT (select json_agg(row_to_json(long)) as t from long) from
postgres-# generate_series(1, 100);
SELECT 100
postgres=# VACUUM FREEZE long_json_as_text;
VACUUM
postgres=# \timing
Timing is on.
postgres=# SELECT 1 FROM long_json_as_text WHERE jsonb_typeof(t::jsonb) = 'not me';
?column?
----------
(0 rows)

Time: 1003.056 ms (00:01.003)

PostgreSQL 16:
--------------
postgres=# SELECT version();
version
---------------------------------------------------------------------------------------------------------
PostgreSQL 16.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-20), 64-bit
(1 row)
postgres=# CREATE TABLE long_json_as_text AS
postgres-# with long as (
postgres(# select repeat(description, 10) from pg_description pd
postgres(# )
postgres-# SELECT (select json_agg(row_to_json(long)) as t from long) from
postgres-# generate_series(1, 100);
SELECT 100
postgres=# VACUUM FREEZE long_json_as_text;
VACUUM
postgres=# \timing
Timing is on.
postgres=# SELECT 1 FROM long_json_as_text WHERE jsonb_typeof(t::jsonb) = 'not me';
?column?
----------
(0 rows)

Time: 437.378 ms

Real-World Scenarios

Scenario 1: Data Validation and Cleanup

With the IS JSON checks, companies can ensure that data ingested from various sources into their PostgreSQL databases meets the expected JSON format. This is crucial for data quality and downstream processing, especially in ETL pipelines and data integration scenarios.
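
A minimal sketch of this idea, assuming a hypothetical staging table: the IS JSON predicate can back a CHECK constraint so malformed payloads are rejected at ingestion time.

CREATE TABLE ingest_staging (
    id      BIGSERIAL PRIMARY KEY,
    payload TEXT CHECK (payload IS JSON OBJECT)
);

INSERT INTO ingest_staging (payload) VALUES ('{"source": "crm", "valid": true}');  -- accepted
-- INSERT INTO ingest_staging (payload) VALUES ('not json'); would fail the CHECK constraint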

Scenario 2: Reporting and Analytics

The new JSON aggregation functions allow for dynamic creation of JSON objects and arrays directly within SQL queries, facilitating complex reporting and analytics tasks. For instance, aggregating customer feedback stored in JSON format by categories and generating reports directly from the database.
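
As one concrete illustration using the company_data sample table from earlier (a stand-in for feedback categories), a per-department headcount can be emitted as a single JSON document:

SELECT JSON_OBJECTAGG(department : headcount) AS report
FROM (SELECT department, COUNT(*) AS headcount
      FROM company_data
      GROUP BY department) AS dept_counts;
-- e.g. { "Engineering" : 2, "Product" : 1, "Sales" : 1 }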

Scenario 3: Configuration and Preference Data

Applications often store user preferences or configuration settings in JSON format. The enhanced numeric literals and direct JSON construction capabilities make it easier to query and update these settings, improving application responsiveness and user experience.
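
For instance, a single preference stored in the products sample table can be read or rewritten in place; a minimal sketch (the new color value is arbitrary):

-- Read one setting
SELECT product_attributes ->> 'color' AS color FROM products WHERE product_id = 1;

-- Update it in place with jsonb_set
UPDATE products
SET product_attributes = jsonb_set(product_attributes, '{color}', '"#1A2B3C"')
WHERE product_id = 1;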

Closing Thoughts

PostgreSQL 16’s SQL/JSON enhancements significantly improve the efficiency and performance of working with JSON data. These features enable developers to write more concise, readable, and efficient queries, leading to faster application development and better data management. As semi-structured data continues to play a critical role in application development, PostgreSQL reaffirms its position as a leading database choice for modern, data-intensive applications.

Whether you’re developing new applications or looking to migrate existing workloads, PostgreSQL 16 offers compelling features that can help streamline your operations and leverage JSON data more effectively. As the database continues to evolve, it’s clear that PostgreSQL is well-equipped to meet the challenges of today’s—and tomorrow’s—data landscape.

Enhanced Developer Experience in EPAS 16 with Oracle Compatibility

Introduction

EDB (EnterpriseDB) has always been at the forefront of enhancing PostgreSQL for enterprise needs. With the release of EPAS 16, they have taken a significant leap in ensuring Oracle compatibility and improving the developer experience. In this post, we’ll delve into four key features – DBTIMEZONE, SESSIONTIMEZONE, TO_TIMESTAMP_TZ, and DBMS_UTILITY.EXPAND_SQL_TEXT – that stand out in EPAS 16, offering a more seamless transition for Oracle developers and a more intuitive environment for all users.

1. DBTIMEZONE Function

The DBTIMEZONE function in EPAS 16 is a welcome addition for developers migrating from Oracle. It returns the time zone of the database, a crucial element for applications dealing with multiple time zones.

Example

SELECT DBTIMEZONE FROM DUAL;
dbtimezone
------------
+00:00
(1 row)

This query returns the database’s time zone, simplifying time zone management in global applications.

2. SESSIONTIMEZONE Function

Similarly, SESSIONTIMEZONE is another function that enhances the EPAS environment by returning the session’s current time zone setting. This is particularly useful in user-specific time zone configurations.

Example:

SELECT SESSIONTIMEZONE FROM dual;
sessiontimezone
-----------------
UTC
(1 row)

This shows the current session’s time zone, aiding developers in debugging and designing user-centric applications.
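
Because SESSIONTIMEZONE reflects the session-level setting, a change made mid-session should be visible immediately; a quick sketch (the zone name is arbitrary):

SET timezone = 'America/New_York';
SELECT SESSIONTIMEZONE FROM dual;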

3. TO_TIMESTAMP_TZ Function

EPAS 16 simplifies the TO_TIMESTAMP_TZ function by allowing it to accept just one argument. This makes converting string data to timestamp with time zone much more straightforward.

Example:

SELECT TO_TIMESTAMP_TZ('20-MAR-20 04:30:00.123456 PM +03:00') FROM DUAL;
to_timestamp_tz
-------------------------------
2020-03-20 13:30:00.123456+00
(1 row)

This converts the provided string into a timestamp with time zone.

4. DBMS_UTILITY.EXPAND_SQL_TEXT Subprogram

EPAS 16 introduces the DBMS_UTILITY.EXPAND_SQL_TEXT procedure, a powerful tool for developers. It provides a way to analyze and expand SQL texts for better understanding and optimization.

Example

Below is an example in which DBMS_UTILITY.EXPAND_SQL_TEXT expands a query against a view to show the exact table behind the view.

SELECT pg_get_viewdef('public.salesemp');
pg_get_viewdef
-------------------------------------------
SELECT empno, +
ename, +
hiredate, +
sal, +
comm +
FROM emp +
WHERE ((job)::text = 'SALESMAN'::text);


edb=# DECLARE
out_sql TEXT;
BEGIN
DBMS_UTILITY.EXPAND_SQL_TEXT(input_sql_text => 'SELECT * FROM salesemp', output_sql_text => out_sql);
DBMS_OUTPUT.PUT_LINE(out_sql);
END;

SELECT empno,
ename,
hiredate,
sal,
comm
FROM ( SELECT emp.empno,
emp.ename,
emp.hiredate,
emp.sal,
emp.comm
FROM emp
WHERE ((emp.job)::text = 'SALESMAN'::text)) salesemp

EDB-SPL Procedure successfully completed

Conclusion

EPAS 16 is making strides in bridging the gap between Oracle and PostgreSQL, particularly in terms of compatibility and ease of development. These four features not only ease the transition for Oracle developers but also enhance the overall developer experience in PostgreSQL. With these advancements, EPAS continues to strengthen its position as a robust, enterprise-level database solution, catering to a diverse range of development needs.
As EPAS evolves, it’s exciting to think about what further enhancements and capabilities future releases will bring to the world of database management and development.

Unlocking the Power of EPAS 16: Exploring New Advanced Features with Examples

The release of EDB Postgres Advanced Server (EPAS) 16 brings a suite of exciting new features, each designed to enhance the functionality and efficiency of database operations. In this blog, we’ll explore five notable additions, providing working code examples and discussing the benefits of each.

1. FETCH cursor BULK COLLECT INTO

Feature: The FETCH cursor BULK COLLECT INTO feature allows for the efficient fetching of multiple rows into a collection in a single operation.

Example:

DECLARE
TYPE EmpList IS TABLE OF employee%ROWTYPE INDEX BY BINARY_INTEGER;
emp_collection EmpList;
CURSOR emp_cursor IS SELECT * FROM employee;
BEGIN
OPEN emp_cursor;
FETCH emp_cursor BULK COLLECT INTO emp_collection LIMIT 3;
CLOSE emp_cursor;

-- Process the collection
FOR i IN 1..emp_collection.COUNT LOOP
DBMS_OUTPUT.PUT_LINE('Employee Name: ' || emp_collection(i).name);
END LOOP;
END;

Employee Name: Employee_1
Employee Name: Employee_2
Employee Name: Employee_3

EDB-SPL Procedure successfully completed

Benefits:

  1. Efficiency in Data Retrieval: Fetches large datasets quickly.
  2. Reduced Server Load: Minimizes the number of round-trips between client and server.

2. NANVL, LNNVL, and DUMP

Features:

  • NANVL: Replaces NaN (not-a-number) values with a specified value.
  • LNNVL: Provides a logical negation of a condition, particularly useful with NULLs.
  • DUMP: Returns a VARCHAR2 value containing the datatype code, length in bytes, and internal representation of a value.

Examples:

SELECT NANVL(comm, 0) FROM emp WHERE empno=7839;
nanvl
-------
0
(1 row)

SELECT * FROM emp WHERE LNNVL(sal > 1000);
empno | ename | job | mgr | hiredate | sal | comm | deptno
-------+-------+-------+------+--------------------+--------+------+--------
7369 | SMITH | CLERK | 7902 | 17-DEC-80 00:00:00 | 800.00 | | 20
7900 | JAMES | CLERK | 7698 | 03-DEC-81 00:00:00 | 950.00 | | 30
(2 rows)

SELECT DUMP('Hello, World!') FROM dual;
dump
---------------------------------------------------------------
Typ=25 Len=13: 72,101,108,108,111,44,32,87,111,114,108,100,33
(1 row)

Benefits: These functions enhance data handling capabilities, especially in complex analytical queries. NANVL and LNNVL improve the handling of special or null values, while DUMP aids in debugging and understanding data representation.

3. Push Down Aggregate (Upper Relation) Operations

Feature: This optimization feature allows aggregate operations to be pushed down to a lower level in the query plan.

Example:

EXPLAIN ANALYZE SELECT SUM(id) FROM employee@xe_oracle;
QUERY PLAN
---------------------------------------------------------------------------------------------
Foreign Scan (cost=15.00..25.00 rows=1 width=32) (actual time=4.503..4.549 rows=1 loops=1)
Relations: Aggregate on (_dblink_employee_2 employee)
Remote Query: SELECT sum(id) FROM employee
Planning Time: 3.580 ms
Execution Time: 5.091 ms
(5 rows)

Benefits: The push-down optimization can lead to significant performance improvements by reducing the amount of data that needs to be processed and transferred within the database engine.

4. NLS_CHARSET Functions

NLS_CHARSET provides functions to determine the character set used by the database, crucial for handling multilingual data.

Example:

SELECT NLS_CHARSET_NAME(6);
nls_charset_name
------------------
UTF8
(1 row)

SELECT NLS_CHARSET_ID('UTF8');
nls_charset_id
----------------
6
(1 row)

5. Procedure Synonyms

Feature: EPAS 16 allows creating synonyms for procedures, so a procedure can be invoked through an alternate name.

Example:

CREATE OR REPLACE PROCEDURE my_procedure IS
BEGIN
DBMS_OUTPUT.PUT_LINE('Hello from my procedure!');
END;
CREATE PROCEDURE

CREATE SYNONYM proc_synonym FOR my_procedure;
CREATE SYNONYM

BEGIN
proc_synonym;
END;
Hello from my procedure!

EDB-SPL Procedure successfully completed

Benefits:

  • Simplified Referencing: Easier to call procedures from different schemas.
  • Improved Maintainability: Changes to procedure paths only need to be updated in the synonym.

Conclusion

These features showcase EDB’s powerful capabilities in handling complex data operations efficiently. By integrating these functionalities into your EDB database strategies, you can significantly enhance the performance and flexibility of your data management processes.

Exploring New DBMS_SQL Subprograms in EDB Postgres Advanced Server 16

EDB Postgres Advanced Server 16 has brought a suite of new subprograms in the DBMS_SQL package, enhancing the capabilities and flexibility for database developers and DBAs. In this blog post, we’ll explore some of these additions, including describe_columns3, describe_columns2, to_cursor_number, to_refcursor, define_array, bind_array, bind_array_raw, define_column_rowid, and column_value_rowid. We’ll delve into their functionalities, provide working code examples, and discuss the benefits these functions offer.

describe_columns3 and describe_columns2

These functions provide detailed information about the columns in a query’s result set.

  • describe_columns3: Offers an extended description, including precision and scale of columns.
  • describe_columns2: An enhanced version of the original describe_columns, providing additional details compared to its predecessor.

Benefits: Enhanced metadata retrieval for dynamic SQL, aiding in precise result set processing and validation.

to_cursor_number and to_refcursor

  • to_cursor_number: Converts a ref cursor to a DBMS_SQL cursor number.
  • to_refcursor: Converts a DBMS_SQL cursor number to a ref cursor.

Benefits: Seamless conversion between cursor types enhances compatibility and integration with different EDB-SPL and SQL code parts.

define_array, bind_array and bind_array_raw

  • define_array: Defines an array for bulk fetch operations.
  • bind_array/bind_array_raw: Binds an array for bulk DML operations.

Benefits: Facilitates bulk operations for improved performance and efficiency in handling large data sets.

define_column_rowid and column_value_rowid

  • define_column_rowid: Defines a column of type ROWID in a query’s result set.
  • column_value_rowid: Retrieves the ROWID value from a specified column in the result set.

Benefits: Allows easy retrieval and handling of ROWID values, crucial for row identification and manipulation.

DBMS_SQL Subprograms’ Examples

Following are some examples of the above subprograms with their output in EDB-SPL.

describe_columns3

DECLARE
cursor_id INTEGER;
column_count INTEGER;
desc_tab DBMS_SQL.DESC_TAB3;
BEGIN
-- Open a cursor
cursor_id := DBMS_SQL.OPEN_CURSOR;

-- Parse a query
DBMS_SQL.PARSE(cursor_id, 'SELECT empno, ename, sal FROM emp', DBMS_SQL.NATIVE);

-- Describe the columns
DBMS_SQL.DESCRIBE_COLUMNS3(cursor_id, column_count, desc_tab);

-- Loop through the described columns
FOR i IN 1 .. column_count LOOP
DBMS_OUTPUT.PUT_LINE('Column ' || i || ':');
DBMS_OUTPUT.PUT_LINE('Name: ' || desc_tab(i).col_name);
DBMS_OUTPUT.PUT_LINE('Data Type: ' || desc_tab(i).col_type);
DBMS_OUTPUT.PUT_LINE('Max Length: ' || desc_tab(i).col_max_len);
-- Other column attributes can be accessed similarly
END LOOP;

-- Close the cursor
DBMS_SQL.CLOSE_CURSOR(cursor_id);
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('An error occurred: ' || SQLERRM);
-- Close the cursor if open
IF DBMS_SQL.IS_OPEN(cursor_id) THEN
DBMS_SQL.CLOSE_CURSOR(cursor_id);
END IF;
END;

Column 1:
Name: empno
Data Type: 2
Max Length: 22
Column 2:
Name: ename
Data Type: 1
Max Length: 10
Column 3:
Name: sal
Data Type: 2
Max Length: 22

EDB-SPL Procedure successfully completed

to_cursor_number and to_refcursor

DECLARE
ref_cursor SYS_REFCURSOR;
cursor_number INTEGER;
query VARCHAR2(1000);
a_employee_id emp.empno%TYPE;
a_name emp.ename%TYPE;
BEGIN
-- Open a REF CURSOR for a query
query := 'SELECT empno, ename FROM emp WHERE deptno = 10';
OPEN ref_cursor FOR query; -- Assuming department_id = 10

-- Convert REF CURSOR to DBMS_SQL cursor number
cursor_number := DBMS_SQL.TO_CURSOR_NUMBER(ref_cursor);

-- Define columns for the cursor
DBMS_SQL.DEFINE_COLUMN(cursor_number, 1, a_employee_id);
DBMS_SQL.DEFINE_COLUMN(cursor_number, 2, a_name);

-- Fetch rows from the cursor
WHILE DBMS_SQL.FETCH_ROWS(cursor_number) > 0 LOOP
DBMS_SQL.COLUMN_VALUE(cursor_number, 1, a_employee_id);
DBMS_SQL.COLUMN_VALUE(cursor_number, 2, a_name);
DBMS_OUTPUT.PUT_LINE('Employee ID: ' || a_employee_id || ' Employee Name: ' || a_name);
END LOOP;

-- Close the cursor
DBMS_SQL.CLOSE_CURSOR(cursor_number);
EXCEPTION
WHEN OTHERS THEN
IF DBMS_SQL.IS_OPEN(cursor_number) THEN
DBMS_SQL.CLOSE_CURSOR(cursor_number);
END IF;
RAISE;
END;

Employee ID: 7782 Employee Name: CLARK
Employee ID: 7839 Employee Name: KING
Employee ID: 7934 Employee Name: MILLER

EDB-SPL Procedure successfully completed

DECLARE
ref_cursor SYS_REFCURSOR;
cursor_number INTEGER;
BEGIN
OPEN ref_cursor FOR SELECT * FROM emp;
cursor_number := DBMS_SQL.TO_CURSOR_NUMBER(ref_cursor);
-- Convert back if needed
ref_cursor := DBMS_SQL.TO_REFCURSOR(cursor_number);
END;

EDB-SPL Procedure successfully completed

define_column_rowid, column_value_rowid and describe_columns2

SET default_with_rowids TO ON;

CREATE TABLE EMPLOYEE AS SELECT LEVEL AS ID, 'Employee_'||LEVEL AS NAME, LEVEL*1000 AS SALARY FROM DUAL CONNECT BY LEVEL <= 5;

DECLARE
cur NUMBER;
desc_t DBMS_SQL.DESC_TAB2;
col_cnt NUMBER;
status NUMBER;
row_id ROWID;
name VARCHAR2(100);
salary NUMBER;
name_arr DBMS_SQL.VARCHAR2_TABLE;
BEGIN
-- Open a cursor
cur := DBMS_SQL.OPEN_CURSOR;

-- Parse the SQL statement
DBMS_SQL.PARSE(cur, 'SELECT ROWID, NAME, SALARY FROM EMPLOYEE', DBMS_SQL.NATIVE);

-- Describe the columns
DBMS_SQL.DESCRIBE_COLUMNS2(cur, col_cnt, desc_t);

-- Define the output columns
DBMS_SQL.DEFINE_COLUMN_ROWID(cur, 1, row_id);
DBMS_SQL.DEFINE_COLUMN(cur, 2, name, 100);
DBMS_SQL.DEFINE_COLUMN(cur, 3, salary);

-- Execute the SQL statement
status := DBMS_SQL.EXECUTE(cur);

-- Fetch the rows and print the row IDs, names, and salaries
LOOP
IF DBMS_SQL.FETCH_ROWS(cur) = 0 THEN
EXIT;
END IF;

DBMS_SQL.COLUMN_VALUE_ROWID(cur, 1, row_id);
DBMS_SQL.COLUMN_VALUE(cur, 2, name);
DBMS_SQL.COLUMN_VALUE(cur, 3, salary);
DBMS_OUTPUT.PUT_LINE('Row ID: ' || row_id || ', Name: ' || name || ', Salary: ' || salary);
END LOOP;

-- Close the cursor
DBMS_SQL.CLOSE_CURSOR(cur);
END;
Row ID: 1, Name: Employee_1, Salary: 1000
Row ID: 2, Name: Employee_2, Salary: 2000
Row ID: 3, Name: Employee_3, Salary: 3000
Row ID: 4, Name: Employee_4, Salary: 4000
Row ID: 5, Name: Employee_5, Salary: 5000

EDB-SPL Procedure successfully completed

define_array, bind_array and bind_array_raw

CREATE TABLE PERSON AS SELECT LEVEL AS ID, 'Person_'||LEVEL AS NAME FROM DUAL CONNECT BY LEVEL <= 5;

DECLARE
p_ids dbms_sql.number_table;
c NUMBER;
dummy NUMBER;
p_name VARCHAR2(100);
BEGIN
-- Define the array of IDs
p_ids(1) := 2;
p_ids(2) := 3;
p_ids(3) := 4;

-- Open a cursor
c := DBMS_SQL.OPEN_CURSOR;

-- Parse the SQL statement
DBMS_SQL.PARSE(c, 'SELECT name FROM PERSON WHERE id IN (:num_array)', DBMS_SQL.NATIVE);

-- Define the output column
dbms_sql.define_column(c, 1, p_name, 100);

-- Bind the array to the SQL statement
DBMS_SQL.BIND_ARRAY(c, ':num_array', p_ids);

-- Execute the SQL statement
dummy := DBMS_SQL.EXECUTE(c);

-- Fetch the rows and print the names
LOOP
EXIT WHEN dbms_sql.fetch_rows(c) <= 0;
dbms_sql.column_value(c, 1, p_name);
dbms_output.put_line(p_name);
END LOOP;

-- Close the cursor
DBMS_SQL.CLOSE_CURSOR(c);
END;
Person_4

EDB-SPL Procedure successfully completed

Conclusion

The introduction of these new subprograms in EDB Postgres Advanced Server 16’s DBMS_SQL package is a testament to the evolving nature of database technology. They offer greater flexibility, efficiency, and precision in handling dynamic SQL operations, especially where complex data types and bulk processing are involved. By integrating these functionalities, developers and DBAs can significantly enhance their database operations, paving the way for more advanced and optimized data handling solutions.

Harnessing the Power of UTL_FILE in EPAS 16: A Deep Dive into Enhanced File Handling Capabilities

In the realm of EDB Postgres Advanced Server (EPAS) 16 databases, the UTL_FILE package stands as a cornerstone for interacting with the file system from EDB-SPL. This package offers a versatile set of subprograms for file operations, making it an invaluable tool for developers and DBAs alike. In this blog post, we will explore the capabilities of new submodules in UTL_FILE of EPAS 16, such as put_nchar, fgetpos, get_line_nchar, fgetattr, putf_nchar, put_line_nchar, fseek, and fopen_nchar, complete with examples.

What is UTL_FILE?

UTL_FILE is a built-in EDB-SPL package that facilitates reading from and writing to files on the server’s file system. It provides a collection of subprograms that can manage files, allowing for operations like file creation, reading, writing, and positioning.

Why is UTL_FILE Important?

The significance of UTL_FILE lies in its ability to bridge the gap between the database and the file system. It allows for:

  1. File System Integration: UTL_FILE bridges the gap between EPAS and the server’s file system, allowing for more integrated and efficient data processing.
  2. Automation and Reporting: It’s essential for generating reports, logging, and automating file-based data exchange processes.
  3. Efficient Data Export/Import: Smoothly handle data transfers between the database and external files.
  4. Enhanced File Manipulation: Read, write, and modify files directly from the database.

How UTL_FILE is Changing the Landscape

The enhanced UTL_FILE package in EPAS 16 gives developers and DBAs a more seamless and integrated approach to handling external files, especially in a multilingual context. The ability to directly read from and write to files using a variety of character sets, including NLS data, greatly simplifies data management tasks that involve external file interaction. The package’s capabilities in handling file attributes and positions further empower users to write more efficient and sophisticated file-handling EDB-SPL code.

Exploring the Submodules with Examples

1. put_nchar

put_nchar writes a specified NCHAR string to a file.

CREATE OR REPLACE DIRECTORY "DIR" AS '/tmp';
DECLARE
  file_handler UTL_FILE.FILE_TYPE;
BEGIN
  file_handler := UTL_FILE.FOPEN_NCHAR('DIR', 'testfile.txt', 'W');
  UTL_FILE.PUT_NCHAR(file_handler, N'Hello, World!');
  UTL_FILE.FCLOSE(file_handler);
END;

2. fgetpos

fgetpos returns the current position of the file pointer.

CREATE OR REPLACE DIRECTORY "DIR" AS '/tmp';  
DECLARE
  file_handler UTL_FILE.FILE_TYPE;
  pos INTEGER;
BEGIN
  file_handler := UTL_FILE.FOPEN('DIR', 'testfile.txt', 'R');
  UTL_FILE.FGETPOS(file_handler, pos);
  DBMS_OUTPUT.PUT_LINE('Position: ' || pos);
  UTL_FILE.FCLOSE(file_handler);
END;

3. get_line_nchar

get_line_nchar reads a line of NCHAR text from a file.

CREATE OR REPLACE DIRECTORY "DIR" AS '/tmp';

DECLARE
  file_handler UTL_FILE.FILE_TYPE;
  line NVARCHAR2(100);
BEGIN
  file_handler := UTL_FILE.FOPEN_NCHAR('DIR', 'testfile.txt', 'R');
  UTL_FILE.GET_LINE_NCHAR(file_handler, line);
  DBMS_OUTPUT.PUT_LINE(line);
  UTL_FILE.FCLOSE(file_handler);
END;

4. fgetattr

fgetattr retrieves attributes of a file, such as its size, existence, and read/write permissions.

CREATE OR REPLACE DIRECTORY "DIR" AS '/tmp';  

DECLARE
  fexists BOOLEAN;
  file_len NUMBER;
  blocksize NUMBER;
BEGIN
  UTL_FILE.FGETATTR('DIR', 'testfile.txt', fexists, file_len, blocksize);
  IF fexists THEN
    DBMS_OUTPUT.PUT_LINE('File size: ' || file_len);
  ELSE
    DBMS_OUTPUT.PUT_LINE('File does not exist');
  END IF;
END;

5. putf_nchar

putf_nchar writes formatted NCHAR text to a file.

CREATE OR REPLACE DIRECTORY "DIR" AS '/tmp';  

DECLARE
  file_handler UTL_FILE.FILE_TYPE;
BEGIN
  file_handler := UTL_FILE.FOPEN_NCHAR('DIR', 'testfile.txt', 'W');
  UTL_FILE.PUTF_NCHAR(file_handler, 'Name: %s, Age: %s', N'John', N'30');
  UTL_FILE.FCLOSE(file_handler);
END;

6. put_line_nchar

put_line_nchar writes a line of NCHAR text, followed by an end-of-line marker.

CREATE OR REPLACE DIRECTORY "DIR" AS '/tmp';

DECLARE
  file_handler UTL_FILE.FILE_TYPE;
BEGIN
  file_handler := UTL_FILE.FOPEN_NCHAR('DIR', 'testfile.txt', 'W');
  UTL_FILE.PUT_LINE_NCHAR(file_handler, N'Hello, World!');
  UTL_FILE.FCLOSE(file_handler);
END;

7. fseek

fseek moves the file pointer to a specified position.

CREATE OR REPLACE DIRECTORY "DIR" AS '/tmp';

DECLARE
  file_handler UTL_FILE.FILE_TYPE;
BEGIN
  file_handler := UTL_FILE.FOPEN('DIR', 'testfile.txt', 'R');
  UTL_FILE.FSEEK(file_handler, 100); -- Move pointer to 100th byte
  UTL_FILE.FCLOSE(file_handler);
END;

8. fopen_nchar

fopen_nchar opens a file for NCHAR data processing.

CREATE OR REPLACE DIRECTORY "DIR" AS '/tmp';

DECLARE
  file_handler UTL_FILE.FILE_TYPE;
BEGIN
  file_handler := UTL_FILE.FOPEN_NCHAR('DIR', 'testfile.txt', 'W');
  -- Perform file operations
  UTL_FILE.FCLOSE(file_handler);
END;

Conclusion

UTL_FILE in EPAS offers a powerful suite of tools for file operations within the database environment. By leveraging these submodules, developers and DBAs can perform a wide range of file manipulation tasks, enhancing the overall capabilities of EPAS. Whether it’s for data export, report generation, or handling complex file-based operations, UTL_FILE stands as a pivotal feature in the Postgres database landscape.

Partition pruning in EDB Postgres 9.5

One of my colleagues, who was recently working with a customer, presented an interesting case: the customer had a partitioned table, and EDB Postgres was not applying partition pruning in their query. So, I thought I would blog about partition pruning so that EDB Postgres developers and DBAs can benefit.

EDB Postgres supports two types of partition pruning:

Constraint exclusion pruning:

It is a feature introduced in PostgreSQL 8.1. This type of pruning works with PostgreSQL-style partitioning. With constraint exclusion enabled, the planner examines the constraints of each partition and tries to prove that the partition need not be scanned because it could not contain any rows meeting the query’s WHERE clause. When the planner can prove this, it excludes the partition from the query plan.
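
This behavior is governed by the constraint_exclusion configuration parameter; its default setting, partition, applies the checks only to inheritance children and UNION ALL subqueries:

SET constraint_exclusion = partition;  -- accepted values: on | off | partition (default)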

However, it has some limitations. Following are the limitations of constraint_exclusion:

a. Constraint exclusion only works when the query’s WHERE clause contains constants (or externally supplied parameters). For example, a comparison against a non-immutable function such as CURRENT_TIMESTAMP cannot be optimized, since the planner cannot know which partition the function value might fall into at run time.
b. Keep the partitioning constraints simple, else the planner may not be able to prove that partitions don’t need to be visited. Use simple equality conditions for list partitioning, or simple range tests for range partitioning, as illustrated in the preceding examples. A good rule of thumb is that partitioning constraints should contain only comparisons of the partitioning column(s) to constants using B-tree-indexable operators.

For verification, the following shows the behavior of constraint_exclusion pruning.
1. Let’s create a PostgreSQL-style partitioned table using the table inheritance feature:

CREATE TABLE measurement (
     city_id        int not null,
     logdate        date not null,
     peaktemp        int,
     unitsales      int
 );
CREATE TABLE measurement_y2004m02 (
     CHECK ( date_part('month'::text, logdate) = 2)
 ) INHERITS (measurement);
CREATE TABLE measurement_y2004m03 (
     CHECK ( date_part('month'::text, logdate) = 3 )
 ) INHERITS (measurement);

2. Execute a simple query to verify the constraint_exclusion behavior based on the above definition:
 edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE date_part('month'::text, logdate) = 3;
                                    QUERY PLAN                                     
-----------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (date_part('month'::text, logdate) = '3'::double precision)
         ->  Seq Scan on measurement_y2004m02
               Filter: (date_part('month'::text, logdate) = '3'::double precision)
         ->  Seq Scan on measurement_y2004m03
               Filter: (date_part('month'::text, logdate) = '3'::double precision)
(8 rows)

The above output shows that EDB Postgres considered all partitions of table measurement, even though we had included the partition column and a constant value in the WHERE clause. This is due to the check constraint, which uses the date_part function. date_part is not immutable in Postgres, therefore at planning time EDB Postgres doesn’t know what value it will return. And if the user doesn’t include a WHERE clause matching the check constraint, pruning will not work.

In Postgres you can make a function immutable by using ALTER FUNCTION command.

In below example, we will make date_part function immutable to check if constraint_exclusion works with date_part immutable function or not:

  1. Convert the date_part function to immutable:
edb=# ALTER FUNCTION date_part (text, timestamp without time zone ) immutable;
ALTER FUNCTION
  2. Perform the EXPLAIN command to check the behavior of constraint_exclusion using the immutable function:
edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE date_part('month'::text, logdate) = 3;
                                    QUERY PLAN
-----------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (date_part('month'::text, logdate) = '3'::double precision)
         ->  Seq Scan on measurement_y2004m03
               Filter: (date_part('month'::text, logdate) = '3'::double precision)
(6 rows)

As you can see with immutable function EDB Postgres was able to perform constraint_exclusion pruning.

What if we change the WHERE clause a little and include the < and = operators in our SQL queries? Below are examples:

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate < DATE '2004-03-01';
                                     QUERY PLAN                                      
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m02
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m03
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
(8 rows)

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate = DATE '2004-02-01';
                                     QUERY PLAN                                      
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate = '01-FEB-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m02
               Filter: (logdate = '01-FEB-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m03
               Filter: (logdate = '01-FEB-04 00:00:00'::timestamp without time zone)
(8 rows)

As you can see, when the WHERE clause changes and no longer matches the way the constraints are defined on the partitions, Postgres will scan all partitions.

Based on the above, we can conclude that if a user is planning to use the Postgres way of partitioning, they have to be careful about the constraint definitions in order to utilize constraint_exclusion pruning.

Let’s modify the definition of the measurement table and verify the >, <, >=, <= and = operators in the WHERE clause.

CREATE TABLE measurement (
     city_id        int not null,
     logdate        date not null,
     peaktemp        int,
     unitsales      int
 ); 
CREATE TABLE measurement_y2004m02 (
     CHECK ( logdate >= DATE '2004-02-01' AND logdate < DATE '2004-03-01' )
 ) INHERITS (measurement);
CREATE TABLE measurement_y2004m03 (
     CHECK ( logdate >= DATE '2004-03-01' AND logdate < DATE '2004-04-01' )
 ) INHERITS (measurement);

Below are explain plans based on the above definition:

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate < DATE '2004-03-01';
                                     QUERY PLAN                                      
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m02
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
(6 rows)

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate = DATE '2004-03-01';
                                     QUERY PLAN                                      
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate = '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m03
               Filter: (logdate = '01-MAR-04 00:00:00'::timestamp without time zone)
(6 rows)

The above clearly shows that, with a correct constraint definition, constraint_exclusion pruning can work for the >, <, >=, <= and = operators in the WHERE clause.

Fast pruning:

EDB Postgres has had CREATE TABLE PARTITION syntax since version 9.1. The PARTITION syntax in EDB Postgres uses one more kind of pruning, called fast pruning. Fast pruning uses the partition metadata and query predicates to efficiently reduce the set of partitions to scan. Fast pruning in EDB Postgres happens before query planning. Let’s verify the behavior of fast pruning.
As mentioned, fast pruning works with partitions which the user created using the EDB Postgres CREATE TABLE PARTITION syntax. Let’s modify the above definition of the measurement table to use the CREATE TABLE PARTITION syntax as given below:

CREATE TABLE  measurement (
     city_id        int not null,
     logdate        date not null,
     peaktemp        int,
     unitsales      int
 )
PARTITION BY RANGE(logdate)
(PARTITION y2004m01 VALUES LESS THAN ('2004-02-01'),
 PARTITION y2004m02 VALUES LESS THAN ('2004-03-01'),
 PARTITION y2004m03 VALUES LESS THAN ('2004-04-01')
);
edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate < DATE '2004-03-01';
                                     QUERY PLAN                                      
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m01
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m02
               Filter: (logdate < '01-MAR-04 00:00:00'::timestamp without time zone)
(8 rows)

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate = DATE '2004-03-01';
                                     QUERY PLAN                                      
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate = '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m03
               Filter: (logdate = '01-MAR-04 00:00:00'::timestamp without time zone)
(6 rows)

edb=# EXPLAIN (costs off) SELECT count(*) FROM measurement WHERE logdate > DATE '2004-03-01';
                                     QUERY PLAN                                      
-------------------------------------------------------------------------------------
 Aggregate
   ->  Append
         ->  Seq Scan on measurement
               Filter: (logdate > '01-MAR-04 00:00:00'::timestamp without time zone)
         ->  Seq Scan on measurement_y2004m03
               Filter: (logdate > '01-MAR-04 00:00:00'::timestamp without time zone)
(6 rows)

For more information on EDB Postgres pruning, please refer to the following link:
https://www.enterprisedb.com/docs/en/9.5/oracompat/Database_Compatibility_for_Oracle_Developers_Guide.1.327.html#

Meet BART – A New Tool for Backup And Recovery Management

EnterpriseDB recently launched a new tool for backup and recovery – named simply EDB Backup and Recovery Tool, or BART. This tool makes the DBA’s life easier by simplifying the tasks for managing their Postgres physical backup and recovery tasks, whether they are PostgreSQL or Postgres Plus Advanced Server deployments.

BART has the following advantages over custom scripts for managing backups:

1. It’s stable and it uses the tool pg_basebackup to take a physical backup. This tool has been well defined and is well-supported by the PostgreSQL community.

2. It catalogs all of the backups users are taking, which is important in terms of:
    i. Listing the type of backups used
   ii. Listing the status of those backups with server information.

3. BART also provides functionality to restore backups, with all required archived WAL files. So automation around this tool will make DBAs’ lives easier for restore and recovery.

4. BART provides an option to validate your backup by using checksum. This is useful for confirming you took a valid backup and it is not corrupted at disk level.

5. BART provides an option to define your retention policy around the backups you are keeping.

Given all of the above advantages, I decided to give this new tool a try and share some tips. To get started, you need the following prerequisites:

1. BART currently requires a Linux 64 bit platform, CentOS 6.x or RHEL 6.x
2. Need to have password-less, direct SSH access to the target machine where you want to restore backups as well as the database servers you want backed up
3. Install the Postgres Plus Advanced Server or PostgreSQL binaries for pg_basebackup

Yum or rpm

To install this tool, you have two options that I will explore below:

1. Yum command
2. Rpm command.

Using the yum command:

To perform a yum command installation, BART users can ask EDB for credentials to the EnterpriseDB yum repository and configure their local yum repository as follows:

echo "[tools]
name=EnterpriseDB Tools
baseurl=http://username:password@yum.enterprisedb.com/tools/redhat/rhel-$releasever-$basearch
enabled=1
gpgcheck=0" > /etc/yum.repos.d/edbtools.repo

After creating the yum repo, the user can execute the following command to install BART:

 yum install edb-bart

If the user doesn’t want to install the EDB Backup and Recovery Tool using the yum command, then the user can download a freestanding RPM from EDB’s website using the link below:

http://www.enterprisedb.com/downloads/postgres-postgresql-downloads

and then enter the rpm install command as follows:

rpm -ivh edb-bart-1.0.1-1.rhel6.x86_64.rpm

After installing BART using the above commands, the user can see the binaries in the directory /usr/edb-bart-1.0/bin and a sample BART configuration file in /usr/edb-bart-1.0/etc.

That’s a very easy installation.

For more information on configuring BART Host and Database Host, the following are some documents that will help:
1. pg_basebackup configuration for PostgreSQL:
http://www.postgresql.org/docs/current/static/app-pgbasebackup.html

2. For direct password-less SSH configuration, the user can refer to the following link:
http://www.enterprisedb.com/docs/en/1.0/bart/EDB_Backup_and_Recovery_Tool_Guide-17.htm#P1008_76316

After the installation of the BART binaries, the user also has to create a BART configuration file.

The following is a sample configuration file for BART:

[BART]
bart-host= enterprisedb@127.0.0.1
backup_path = /opt/backup
pg_basebackup_path = /usr/ppas-9.4/bin/pg_basebackup
logfile = /tmp/bart.log

[PG]
host = 127.0.0.1
port = 5432
user = postgres
description = "Postgres server"

[PPAS94]
host = 127.0.0.1
port = 5444
user = enterprisedb
description = "PPAS 94 server"

Global Configuration Settings

Content under the [BART] tag are called global configuration settings. Under this tag are the following:

1. bart-host: the IP address of the host on which BART is installed. The value for this parameter must be specified in the form: bart_user@bart_host_address, where bart_user is the operating system user account on the BART host that is used to run BART and owns the BART backup catalog directory. bart_host_address is the IP address of the BART host.

2. backup_path: specifies the file system parent directory where all BART database server base backups and archived WAL files are stored. This parameter is required.

3. pg_basebackup_path: specifies the path to the pg_basebackup program of the Postgres database server installed on the BART host.

4. logfile: specifies the path to the BART log file. This parameter is optional. If no path to a log file is specified after logfile =, or if the parameter is commented out, BART does not create a log file.

The remaining part of the configuration file is self-explanatory. The [PG] and [PPAS94] tags describe the servers which the user wants to back up.

Pg_basebackup Settings

After performing the above configuration on the backup server, the user has to configure the following settings on the servers that they want to back up. Below are the settings for enabling backup using pg_basebackup.

The user has to set a few parameters in the postgresql.conf file of the database server they want to back up:

1. wal_level parameter set to archive or hot_standby.
2. archive_mode = on
3. archive_command setting.
4. max_wal_senders set to 1 or more, since pg_basebackup uses the replication protocol to copy the data directory.
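
Putting these together, an illustrative postgresql.conf fragment might look like the following (the archive destination path is an assumption; point it at your own WAL archive location):

wal_level = archive                # or hot_standby
archive_mode = on
archive_command = 'cp %p /opt/backup/ppas94/archived_wals/%f'   # assumed archive path
max_wal_senders = 1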

For more information on each setting please refer to the following:
1. wal_level:
http://www.postgresql.org/docs/9.4/static/runtime-config-wal.html

2. archive_mode and archive_command:
http://www.postgresql.org/docs/9.4/static/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVING

3. max_wal_senders:
http://www.postgresql.org/docs/9.4/static/runtime-config-replication.html
http://www.enterprisedb.com/docs/en/1.0/bart/EDB_Backup_and_Recovery_Tool_Guide-19.htm#TopOfPage

With the above settings, the user then needs to update the pg_hba.conf file for the replication connection.
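
A minimal pg_hba.conf entry for that replication connection might look like this (the user and address mirror the sample configuration above):

# TYPE  DATABASE     USER          ADDRESS        METHOD
host    replication  enterprisedb  127.0.0.1/32   md5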

Note: The above settings are for pg_basebackup to take backups using the replication protocol. In case users need more information about pg_basebackup and its settings, please use the above-mentioned links.

How BART Works

Now, since we have configured both servers, let’s have a look at how BART works.

The following command executes a backup:

 bart -c bart.cfg BACKUP -s ppas94

And below is the output:

[bart@localhost ~]$ bart -c bart.cfg BACKUP -s ppas94

INFO:  creating backup for server 'ppas94'
INFO:  backup identifier: '1413852137762'
6394456/6394456 kB (100%), 1/1 tablespace

INFO:  backup checksum: 7f49ea9653511308710c174f22ec765d
INFO:  backup completed successfully
[bart@localhost ~]$ 

That was an easy way to take a backup. The DBA can also create a job to execute the above command to take backups.
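
As an illustration, a cron entry such as the following could drive a nightly backup (the config file path is assumed from the install locations mentioned above; the schedule is arbitrary):

# Nightly base backup of server 'ppas94' at 1:00 a.m.
0 1 * * * /usr/edb-bart-1.0/bin/bart -c /usr/edb-bart-1.0/etc/bart.cfg BACKUP -s ppas94 >> /tmp/bart_cron.log 2>&1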

If the user wants to list the backup using BART, the user can use the option SHOW-BACKUPS:

[bart@localhost ~]$ bart -c bart.cfg SHOW-BACKUPS -s ppas94
 Server Name   Backup ID       Backup Time           Backup Size  
                                                                  
 ppas94        1413852137762   2014-10-20 17:43:41   6244.59 MB   

This is useful for knowing what backups a user has available for recovery. The above command gives important information:

1.	Backup ID: It’s a unique ID for the physical backup
2.	Backup Time: Time when backup was taken
3.	Backup Size: Size of backup

This information is useful when a user wants to plan for recovery using backup. This way, the user can also plan for disk size.

Sometimes a user wants to verify their backup state. The VERIFY-CHKSUM option is useful in this case:

[bart@localhost ~]$ bart -c bart.cfg VERIFY-CHKSUM -s ppas94 -i 1413852137762
 Server Name   Backup ID       Verify  
                                       
 ppas94        1413852137762   OK      

I have to say, after putting EDB BART through its paces, I think DBAs will enjoy having such a great tool for making Backup Management easy.

In my next post, I will blog about the Recovery process.

New in Postgres Plus Advanced Server 9.2

Good news: the Postgres Plus Advanced Server Beta version is now available. It has all the new features of PostgreSQL 9.2, and it also has new features which are specific to Advanced Server.

I will cover the new features of PostgreSQL 9.2 later, as per the release notes. However, the following link covers the major features of PostgreSQL 9.2 and can be taken as a reference.

http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.2

In this blog, I am going to cover only features specific to Advanced Server 9.2 core, which are:

1. INSERT APPEND Hint in PPAS 9.2

PPAS 9.2 now supports the INSERT append hint. This is a very interesting feature, and it is very useful for users who frequently delete records in bulk and perform bulk INSERTs.
This hint can provide some benefit for INSERTs: it makes PPAS bypass the Free Space Map and append the rows at the end of the relation (table).
Its usage is given below:

INSERT /*+append*/ INTO tab21115 VALUES(1,'abc',sysdate);

2. Procedure Called Like a Function Call

PPAS 9.2 now allows calling a procedure with the following syntax:

SELECT * FROM procedure(arg1,arg2,…);

This type of procedure call is allowed for procedures which have OUT/INOUT parameters; it lets the user invoke a procedure the same way they would a function.

Let’s see how it works:

a. Create a Procedure as Given below:

CREATE OR REPLACE PROCEDURE foo_proc(A IN INT, B INOUT INT, C OUT INT)
AS
BEGIN
   b:=a+b;
   c:=b+1;
END;

Until 9.1, and even in 9.2, the user can do something like the following:

DECLARE
  d int:=1;
  e int:=2;
  f int;
BEGIN
   foo_proc(d,e,f);
   DBMS_OUTPUT.PUT_LINE('e = '||e);
   DBMS_OUTPUT.PUT_LINE('d = '||d);
   DBMS_OUTPUT.PUT_LINE('f= '||f);
END;
e = 3
d = 1
f= 4

Now in 9.2, user can also do this:

edb=# select * from foo_proc(1,2);
 b | c 
---+---
 3 | 4
(1 row)

In case the user has a procedure and a function with the same name and wants to execute the procedure using the SELECT command, note that the SELECT command gives preference to the function over the procedure, as shown below:

Function Definition:

CREATE OR REPLACE FUNCTION foo(a INT) RETURN INT
AS
BEGIN
  RETURN 2;
END;

CREATE PROCEDURE foo(A int, B OUT int)
AS
BEGIN
   B:=a+1;
END;

edb=# select * from foo(4);
 foo 
-----
   2

However, user can still use EXEC command to execute Procedure as given below:

edb=# exec foo(4);
 ?column? 
----------
        5
(1 row)

Or, if you are using an anonymous block, then PERFORM will also work with the procedure, as given below:

edb=# DECLARE
edb-#     a int:=4;
edb$#     b int;
edb$# BEGIN
edb$#    PERFORM foo(a,b);
edb$#    DBMS_OUTPUT.PUT_LINE('b = '||b);
edb$# END;
b = 5

EDB-SPL Procedure successfully completed

3. Object Type Support:

PPAS already has object type support. However, the new version comes with an enhancement to that support: the user is now able to create object types with attributes, member functions, and procedures. This has the advantage of reduced coding when defining object types and maintaining them.

Example is given below:

a. Define a Type:

    CREATE OR REPLACE TYPE PersonObj AS OBJECT (
  first_name  VARCHAR2(50),
  last_name   VARCHAR2(50),
  date_of_birth  DATE,
  MEMBER FUNCTION getAge RETURN NUMBER
);

b. Define a Type Body:

CREATE OR REPLACE TYPE BODY PersonObj AS
  MEMBER FUNCTION getAge RETURN NUMBER AS
  BEGIN
    RETURN Trunc(Months_Between(Sysdate, date_of_birth)/12);
  END getAge;
END;

c. Define a Table based on Body type:

CREATE TABLE people (
  id      NUMBER(10) NOT NULL,
  person  PersonObj
);

d. To insert data, the default constructor can be used as given below:

INSERT INTO people
VALUES (1, PersonObj('John','Doe',
        TO_DATE('01/01/1999','DD/MM/YYYY')));
INSERT INTO people
VALUES (2, PersonObj('Jane','Doe',
        TO_DATE('01/01/1999','DD/MM/YYYY')));

e. A user can access the data in the table in the following way:

SELECT p.id,
       (p.person).first_name,
       p.person.getAge() age
FROM   people p;
 id | first_name | age 
----+------------+-----
  1 | John       |  13
  2 | Jane       |  13
(2 rows)

4. PL/SQL Subtypes:

A subtype is a TYPE defined on a base type or user-defined type, optionally with constraints. The syntax for declaring a SUBTYPE in PL/SQL is given below:

SUBTYPE <subtype_name> IS <type_name> [(constraint)] [NOT NULL];

Example is given below:

Example 1:

DECLARE
  SUBTYPE INT2 IS NUMBER;
  v2 INT2 (3,0);
BEGIN
  v2:= 123;
  DBMS_OUTPUT.PUT_LINE ('V2 = '|| v2);
END;

Example 2:

DECLARE
  SUBTYPE v_word IS VARCHAR2(10) NOT NULL;
  verb  v_word  := 'verb';
  noun  v_word  := 'noun';
BEGIN
  noun :='n1';
  DBMS_OUTPUT.PUT_LINE (UPPER(verb));
  DBMS_OUTPUT.PUT_LINE (UPPER(noun));
END;

5. DROP TABLE CASCADE CONSTRAINTS in 9.2:

PPAS 9.2 now supports DROP TABLE CASCADE CONSTRAINTS; this is a very interesting feature and is not available in PostgreSQL 9.2.

Using this command, the user can drop all referential integrity constraints that refer to primary and unique keys in the dropped table. If you omit this clause and such referential integrity constraints exist, then the database returns an error and does not drop the table.

This is useful when somebody wants to drop a master table but doesn’t want to drop a child table which has a foreign key constraint referencing the master. An example is given below:

CREATE TABLE master(id numeric primary key);
CREATE table child(id numeric references master(id));
insert into master values(1);
insert into child values(1);
edb=# drop table master cascade constraints;
NOTICE:  drop cascades to constraint child_id_fkey on table child
DROP TABLE
edb=# \d child
  Table "enterprisedb.child"
 Column |  Type   | Modifiers 
--------+---------+-----------
 id     | numeric | 

As you can see above, I dropped the master table, on which the child table had a referential integrity constraint; however, the child table still exists. PostgreSQL 9.2 doesn’t have CASCADE CONSTRAINTS. Therefore, the user has to first drop the constraints on the tables which have referential integrity constraints and then drop the referenced table. This process requires the user to find all tables which have referential integrity constraints that refer to the primary key of the master table. This command means DBAs/users don’t have to worry about finding the child tables.

6. Allow Throwing Multiple Errors as Warnings While Compiling Procedures/Functions Using SPL.

This is also a very interesting enhancement for SPL. Whenever the user compiles a function/procedure in SPL, it shows all the error messages in the particular function/procedure body, so that the user doesn’t have to revisit their code every time to fix the next issue. An example is given below:

CREATE OR REPLACE FUNCTION foo() RETURN integer
AS
BEGIN
   a:=a=1
;
B:=b + 1;
I am making fool;
END;

Output:
ERROR:  "a" is not a known variable
LINE 4:    a:=a=1
           ^
ERROR:  "b" is not a known variable
LINE 6: B:=b + 1;
        ^
ERROR:  syntax error at or near "I"
LINE 7: I am making fool;
        ^
ERROR:  compilation of SPL function/procedure "foo" failed due to 3 errors

As you can see above, it reported all the mistakes I made in the function body, and now I can fix all of them in one go, reducing the overhead of compiling again and again.

This is very useful for Developers/DBAs.

7. DEFAULT for Types declared in Packages:

Now, in PPAS 9.2, the user is able to define a DEFAULT value for a variable of a particular type declared in a package, as given below:

CREATE OR REPLACE PACKAGE INITPKG_pro_b4_default AS
PROCEDURE show_default_values;
 n1 NUMBER DEFAULT 20;
 c1 VARCHAR2 DEFAULT 'Default';
END;

CREATE OR REPLACE PACKAGE BODY INITPKG_pro_b4_default AS
 PROCEDURE show_default_values IS
  n number;
 BEGIN
  dbms_output.put_line(c1);
  dbms_output.put_line(n1);
  n1 := n1*n1;
  n1 := SQRT(n1);
  dbms_output.put_line(n1);
 END;
END;

edb=# exec INITPKG_pro_b4_default.show_default_values;
Default
20
20.000000000000000

EDB-SPL Procedure successfully completed

8. TABLE expression support for nested tables.

In 9.2, users can use TABLE expressions with nested tables. This feature was missing and had been requested by many Oracle DBAs. A TABLE expression lets the user query a collection in the FROM clause as if it were a table.

edb=# CREATE OR REPLACE TYPE comp_typ_str_21189 IS TABLE OF VARCHAR2(100);
CREATE TYPE
edb=# 
edb=# select * from table(comp_typ_str_21189('PPP','QQQ')) ;
 column_value 
--------------
 PPP
 QQQ
(2 rows)
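
Since the collection behaves like a table, the usual SQL clauses apply to it as well. A small illustrative query (not from the original post; it should return the single row PPP):

SELECT UPPER(column_value) AS val
FROM TABLE(comp_typ_str_21189('PPP','QQQ'))
WHERE column_value LIKE 'P%';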

9. INPUT/OUTPUT functions for nested tables:

PPAS 9.2 supports user-defined input/output functions for nested tables. This feature was missing in 9.1.

Some examples are given below:

edb=# create or replace type mytab as table of varchar2(90);
CREATE TYPE
edb=# Create or replace function fun return mytab
edb-# as
edb$# begin
edb$#  return mytab('a','b',3);
edb$# end;
CREATE FUNCTION
edb=# select fun;
   fun
---------
 {a,b,3}
(1 row)

edb=# 
edb=# create or replace function fun return mytab
edb-# as
edb$#  nvar mytab;
edb$# begin
edb$#  nvar := mytab();
edb$#  nvar.extend(4);
edb$#  nvar(1) := 'foo';
edb$#  nvar(2) := NULL;
edb$#  nvar(3) := 'deleteme';
edb$#  nvar(4) := 'bar';
edb$#  return nvar;
edb$# end;
CREATE FUNCTION
edb=# -- Verify the user is able to see the data
edb=# select fun;
           fun
-------------------------
 {foo,NULL,deleteme,bar}
(1 row)

10. LOG is no longer a reserved keyword for functions.
Up to PPAS 9.1, LOG was a reserved keyword, and users were not allowed to create functions with this name. In 9.2, users can use this keyword.

edb=# CREATE OR REPLACE FUNCTION log( a int ) return int as
edb$# BEGIN
edb$# dbms_output.put_line('Function LOG is called');
edb$# return a*2;
edb$# END;
CREATE FUNCTION
edb=# SELECT LOG(10);
Function LOG is called
 log 
-----
  20
(1 row)

11. Variables can be named current_date.

We know that current_date is a special function which returns the current date in PostgreSQL/PPAS. Up to PPAS 9.1, users were not allowed to use this reserved name for variables. In 9.2, they can, as given below:

edb=# create table t_currentdate(current_date int);
CREATE TABLE
edb=# desc t_currentdate
    Table "enterprisedb.t_currentdate"
    Column    |  Type   | Modifiers 
--------------+---------+-----------
 current_date | integer | 

create or replace procedure proc_currentdate(current_date date) is
begin
 dbms_output.put_line(current_date);
end;

edb=# exec proc_currentdate(current_date);
05-OCT-12 00:00:00

EDB-SPL Procedure successfully completed

12. New data type STRING.

Oracle supports STRING as a data type, but it wasn't available in PPAS 9.1, so it has been included in 9.2. STRING is an alias for the VARCHAR data type. An example is given below:

edb=# CREATE TABLE test_string(col string)
edb-# ;
CREATE TABLE
edb=# insert into test_string values('As an Example');
INSERT 0 1
edb=# select * from test_string;
      col      
---------------
 As an Example
(1 row)

edb=# \d test_string
    Table "enterprisedb.test_string"
 Column |       Type        | Modifiers 
--------+-------------------+-----------
 col    | character varying | 

13. NVARCHAR2 data type support in 9.2

PPAS 9.2 allows users to use NVARCHAR2 as a data type. In PPAS 9.2, NVARCHAR2 is mapped to the VARCHAR data type. This addresses a long-standing request from Oracle users.

Example is given below:

edb=# CREATE TABLE test_nvarchar(col NVARCHAR2(10))
edb-# ;
CREATE TABLE
edb=# \d test_nvarchar
     Table "enterprisedb.test_nvarchar"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 col    | character varying(10) | 

14. Multibyte and string delimiter support in EDB*Loader.

EDB*Loader in PPAS 9.2 is more efficient at loading data than in 9.1. In some cases it is difficult to find a single-character delimiter for a data file, because the data itself can contain every possible character. To load such files, users can now define their own string delimiter. An example is given below:

Data File: data.log:
1$$abc
2$$ccc
3$$dddddddd

Control FIle:
LOAD DATA
INFILE '/tmp/data.log'
BADFILE '/tmp/data.bad'
truncate INTO table edb_loader
FIELDS TERMINATED BY '$$' optionally enclosed by '"'
(id , col )
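
The control file assumes the target table already exists; a minimal DDL that matches it (the column types here are an assumption, chosen to fit the sample data):

CREATE TABLE edb_loader (
 id  numeric,
 col varchar2(20)
);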

-bash-4.1$ edbldr userid=enterprisedb/ control=/tmp/edb.ctl log=/tmp/edb.log
EDB*Loader: Copyright (c) 2007-2012, EnterpriseDB Corporation.

Successfully processed (3) records
-bash-4.1$ psql
psql (9.2.0.1)
Type "help" for help.

edb=# select * from edb_loader ;
 id |   col    
----+----------
  1 | abc
  2 | ccc
  3 | dddddddd
(3 rows)

Workaround for SMTP Exceptions in PPAS 9.0

Up to PPAS 9.0, the UTL_SMTP package has no specific named exceptions for transient errors, invalid operations, and permanent errors. Oracle, however, provides the following named exceptions for SMTP:
1. INVALID_OPERATION:
Raised when an invalid operation is made; in other words, when any API other than write_data(), write_raw_data() or close_data() is called after open_data(), or when write_data(), write_raw_data() or close_data() is called without first calling open_data().

2. TRANSIENT_ERROR:
Raised when receiving a reply code in the 400 range.

3. PERMANENT_ERROR:
Raised when receiving a reply code in the 500 range.

Oracle users who use the SMTP package also rely on the above exceptions extensively to track SMTP errors/messages and handle each type of exception accordingly.

Since PPAS does not yet have these exceptions, people have been stuck looking for a workaround.

To make this easy, I did some research and came up with the following workaround, which users can apply in PPAS for SMTP exceptions just as they do in Oracle.

The workarounds for the SMTP named exceptions are given below:

There is no direct way to trap the SMTP reply code; however, SQLERRM in PPAS does contain the description of the SMTP reply code. So, using SQLERRM, users can implement their own named exceptions, which is pretty simple and straightforward.

The easiest way is to create functions that decide whether an exception is TRANSIENT, PERMANENT, or INVALID_OPERATION. Users can create the following functions in PPAS:

1. Function For transient Error Message:

CREATE OR REPLACE FUNCTION SMTP_TRANSIENT_ERROR(text) RETURNS boolean
As
$$
SELECT CASE WHEN
$1 ILIKE '%Service not available%' OR 
$1 ILIKE '%Requested mail action not taken%' OR 
$1 ILIKE '%Requested action terminated%' OR
$1 ILIKE '%Requested action not taken%' OR
$1 ILIKE '%You have no mail%' OR
$1 ILIKE '%TLS not available due to temporary reason%. Encryption required for requested authentication mechanism%' OR
$1 ILIKE '%Unable to queue messages for %'  OR
$1 ILIKE '%Node%not allowed%' THEN true
ELSE false END from dual;
$$ language sql;

2. Function for Permanent Error Message:

CREATE OR REPLACE FUNCTION SMTP_PERMANENT_ERROR(text) RETURNS Boolean 
AS
$$
SELECT CASE WHEN 
$1 ILIKE '%Syntax error%command unrecognized%' OR 
$1 ILIKE '%Syntax error in parameters or arguments%' OR
$1 ILIKE '%Command not implemented%' OR
$1 ILIKE '%Bad sequence of commands%' OR
$1 ILIKE '%Command parameter not implemented%' OR
$1 ILIKE '%does not accept mail%' OR
$1 ILIKE '%Must issue a STARTTLS command first. Encryption required for requested authentication mechanism%' OR
$1 ILIKE '%Authentication mechanism is too weak%' OR
$1 ILIKE '%Encryption required for requested authentication mechanism%' OR
$1 ILIKE '%Requested action not taken%' OR
$1 ILIKE '%User not local; please try%' OR 
$1 ILIKE '%Requested mail action terminated: exceeded storage allocation%' OR
$1 ILIKE '%Requested action not taken:%' OR
$1 ILIKE '%Transaction failed%' THEN true ELSE false END FROM DUAL;
$$ language sql;

3. Function for INVALID_OPERATION

CREATE OR REPLACE FUNCTION SMTP_INVALID_OPERATION(text) RETURNS boolean
AS
$$
SELECT CASE WHEN $1 ILIKE '%INVALID%OPERATION%STATE%' THEN true ELSE false END FROM dual;
$$ language sql;
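
A quick sanity check of the three classifiers (the message strings below are illustrative examples, not actual server replies):

SELECT SMTP_TRANSIENT_ERROR('421 Service not available, closing transmission channel') AS transient,
       SMTP_PERMANENT_ERROR('500 Syntax error, command unrecognized') AS permanent,
       SMTP_INVALID_OPERATION('invalid operation in current state') AS invalid_op;
-- expected result: t | t | t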

Below are some working examples:

In Oracle (INVALID_OPERATION) :


CREATE OR REPLACE PROCEDURE SEND_MAIL (
 msg_to      varchar2,
 msg_subject varchar2,
 msg_text    varchar2   )
IS
 c  utl_smtp.connection;
 rc integer;
 msg_from    varchar2(50) := 'Oracle9.2';
 mailhost    VARCHAR2(30) := '127.0.0.1';   -- local database host

BEGIN
 c := utl_smtp.open_connection(mailhost, 25); -- SMTP on port 25    
 utl_smtp.helo(c, mailhost);
 utl_smtp.mail(c, msg_from);
 utl_smtp.rcpt(c, msg_to);
 utl_smtp.write_data(c, 'MESSG'||UTL_TCP.CRLF); -- Added by me

 utl_smtp.data(c,'From: Oracle Database' || utl_tcp.crlf ||
                        'To: ' || msg_to || utl_tcp.crlf ||
                        'Subject: ' || msg_subject ||
                         utl_tcp.crlf || msg_text);
 utl_smtp.quit(c);

 EXCEPTION
   WHEN UTL_SMTP.INVALID_OPERATION THEN
      dbms_output.put_line(' Invalid Operation in Mail attempt    
                             using UTL_SMTP.');
   WHEN UTL_SMTP.TRANSIENT_ERROR THEN
      dbms_output.put_line(' Temporary e-mail issue - try again'); 
   WHEN UTL_SMTP.PERMANENT_ERROR THEN
      dbms_output.put_line(' Permanent Error Encountered.'); 
END;

SQL> exec send_mail(msg_to=>'vibhor.aim@gmail.com', -
              msg_subject => 'Hello from Oracle', -
              msg_text    => 'This is the body of the message'-
           );
Invalid Operation in Mail attempt
			      using UTL_SMTP.

PL/SQL procedure successfully completed.

In PPAS:

CREATE OR REPLACE PROCEDURE SEND_MAIL (
 msg_to      varchar2,
 msg_subject varchar2,
 msg_text    varchar2   )
IS
 c  utl_smtp.connection;
 rc integer;
 msg_from    varchar2(50) := 'PPAS 9.0';
 mailhost    VARCHAR2(30) := '127.0.0.1';   -- local database host

BEGIN
 c := utl_smtp.open_connection(mailhost, 25); -- SMTP on port 25    
 utl_smtp.helo(c, mailhost);
 utl_smtp.mail(c, msg_from);
 utl_smtp.rcpt(c, msg_to);
 utl_smtp.write_data(c, 'MESSG'||UTL_TCP.CRLF); -- Added by me

 utl_smtp.data(c,'From: Oracle Database' || utl_tcp.crlf ||
                        'To: ' || msg_to || utl_tcp.crlf ||
                        'Subject: ' || msg_subject ||
                         utl_tcp.crlf || msg_text);
 utl_smtp.quit(c);

 EXCEPTION
   WHEN OTHERS THEN
     IF SMTP_INVALID_OPERATION(SQLERRM) THEN
       DBMS_OUTPUT.PUT_LINE('Invalid Operation in Mail attempt using UTL_SMTP.');
     ELSIF SMTP_TRANSIENT_ERROR(SQLERRM) THEN
       DBMS_OUTPUT.PUT_LINE(' Temporary e-mail issue - try again');
     ELSIF SMTP_PERMANENT_ERROR(SQLERRM) THEN
       DBMS_OUTPUT.PUT_LINE(' Permanent Error Encountered.');
     ELSE
       DBMS_OUTPUT.PUT_LINE('SQLCODE: '||SQLCODE||' Error Message: '||SQLERRM);
     END IF;
END;


edb=# exec send_mail(msg_to=>'vibhor.aim@gmail.com', 
edb(#                msg_subject => 'Hello from PPAS', 
edb(#                msg_text    => 'This is the body of the message');
Invalid Operation in Mail attempt using UTL_SMTP.

EDB-SPL Procedure successfully completed

Example 2: In Oracle (UTL_SMTP.TRANSIENT_ERROR):

CREATE OR REPLACE PROCEDURE SEND_MAIL (
 msg_to      varchar2,
 msg_subject varchar2,
 msg_text    varchar2   )
IS
 c  utl_smtp.connection;
 rc integer;
 msg_from    varchar2(50) := 'Oracle9.2';
 mailhost    VARCHAR2(30) := '192.168.23.25';   -- local database host

BEGIN
 c := utl_smtp.open_connection(mailhost, 25); -- SMTP on port 25    
 utl_smtp.helo(c, mailhost);
 utl_smtp.mail(c, msg_from);
 utl_smtp.rcpt(c, msg_to);
 utl_smtp.write_data(c, 'MESSG'||UTL_TCP.CRLF); -- Added by me

 utl_smtp.data(c,'From: Oracle Database' || utl_tcp.crlf ||
                        'To: ' || msg_to || utl_tcp.crlf ||
                        'Subject: ' || msg_subject ||
                         utl_tcp.crlf || msg_text);
 utl_smtp.quit(c);

 EXCEPTION
   WHEN UTL_SMTP.INVALID_OPERATION THEN
      dbms_output.put_line(' Invalid Operation in Mail attempt    
                             using UTL_SMTP.');
   WHEN UTL_SMTP.TRANSIENT_ERROR THEN
      dbms_output.put_line(' Temporary e-mail issue - try again'); 
   WHEN UTL_SMTP.PERMANENT_ERROR THEN
      dbms_output.put_line(' Permanent Error Encountered.'); 
END;



SQL> exec send_mail(msg_to=>'vibhor.aim@gmail.com', -
              msg_subject => 'Hello from Oracle', -
              msg_text    => 'This is the body of the message'-
           );
Temporary e-mail issue - try again

PL/SQL procedure successfully completed.

In PPAS:


CREATE OR REPLACE PROCEDURE SEND_MAIL (
 msg_to      varchar2,
 msg_subject varchar2,
 msg_text    varchar2   )
IS
 c  utl_smtp.connection;
 rc integer;
 msg_from    varchar2(50) := 'PPAS 9.0';
 mailhost    VARCHAR2(30) := '192.168.23.25';   -- local database host

BEGIN
 c := utl_smtp.open_connection(mailhost, 25); -- SMTP on port 25    
 utl_smtp.helo(c, mailhost);
 utl_smtp.mail(c, msg_from);
 utl_smtp.rcpt(c, msg_to);
 utl_smtp.write_data(c, 'MESSG'||UTL_TCP.CRLF); -- Added by me

 utl_smtp.data(c,'From: Oracle Database' || utl_tcp.crlf ||
                        'To: ' || msg_to || utl_tcp.crlf ||
                        'Subject: ' || msg_subject ||
                         utl_tcp.crlf || msg_text);
 utl_smtp.quit(c);

 EXCEPTION
   WHEN OTHERS THEN
     IF SMTP_INVALID_OPERATION(SQLERRM) THEN
       DBMS_OUTPUT.PUT_LINE('Invalid Operation in Mail attempt using UTL_SMTP.');
     ELSIF SMTP_TRANSIENT_ERROR(SQLERRM) THEN
       DBMS_OUTPUT.PUT_LINE(' Temporary e-mail issue - try again');
     ELSIF SMTP_PERMANENT_ERROR(SQLERRM) THEN
       DBMS_OUTPUT.PUT_LINE(' Permanent Error Encountered.');
     ELSE
       DBMS_OUTPUT.PUT_LINE('SQLCODE: '||SQLCODE||' Error Message: '||SQLERRM);
     END IF;
END;


edb=# exec send_mail(msg_to=>'vibhor.aim@gmail.com', 
edb(#                msg_subject => 'Hello from PPAS', 
edb(#                msg_text    => 'This is the body of the message');
Temporary e-mail issue - try again

EDB-SPL Procedure successfully completed
edb=# 

Have fun!

sessionwatch: A small tool to monitor session activities of PostgreSQL

Generally, a DBA has to log in to the system and query the pg_stat_activity view to monitor session activity. But if someone wants continuous monitoring, say every 2 seconds, they have to run the SQL against pg_stat_activity every 2 seconds.

For such a task, I have made the following shell script, which can be used with the watch command on Linux to monitor the activity.


#!/bin/bash

## Prepared by Vibhor Kumar (EDB).

Q="select procpid,now() - query_start as long, waiting, current_query from pg_stat_activity where procpid pg_backend_pid() limit 10;"
while getopts "b:U:d:p:" opt; do
 case $opt in
 b) BIN=$OPTARG;;
 U) PGUSER=$OPTARG;;
 d) DB=$OPTARG;;
 p) port1=$OPTARG;;
 *) echo "Usage: $0 -b -d -U -mp -sp ";;
 esac
 done

# All four options are required; bail out with usage if any is missing.
if [ -z "$BIN" ] || [ -z "$PGUSER" ] || [ -z "$DB" ] || [ -z "$port1" ]; then
 echo "Usage: $0 -b bin directory -d database name -U user -p port"
 exit 1
fi

function verifybin_connect()
{
 # Make sure psql exists in the given bin directory.
 if [ ! -f "$BIN/psql" ]; then
  echo "ERROR: psql Not Found!"
  exit 1
 fi
 # Verify that the database accepts connections.
 Q1="SELECT 'ping';"
 $BIN/psql -U $PGUSER -h localhost -p $port1 -c "$Q1" -d $DB -t >/dev/null 2>/dev/null
 if [ $? -ne 0 ]; then
  echo "ERROR: $DB is not pinging on localhost"
  exit 1
 fi
}

verifybin_connect
$BIN/psql -c "$Q" -U $PGUSER -d $DB -p $port1

Usage is given below:

vibhore@ubuntu:~$ ./sessionwatch
Usage: ./sessionwatch -b bin directory -d database name -U user -p port

Example of monitoring the activity is given below:

watch ./sessionwatch -d edb -U enterprisedb -p 5444 -b /opt/PostgresPlus/8.4AS/bin
Every 2.0s: ./sessionwatch -d edb -U enterprisedb -p 5444 -b /opt/PostgresPlus/8.4AS/bin Sun Apr 24 09:25:50 2011

 procpid |          long          | waiting | current_query 
---------+------------------------+---------+---------------
    1292 |        00:00:25.582061 | f       | 
    1314 |        03:13:59.158917 | f       | 
    1172 |        00:00:04.899938 | f       | 
    1324 |        08:18:58.855179 | f       | 
    1330 |        00:00:18.448067 | f       | 
    1819 | 2 days 06:11:09.667475 | f       | 
    1836 |        00:00:00.637558 | f       | 
(7 rows)
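
By default, watch refreshes every 2 seconds; its -n flag changes the interval. For example, to poll every 5 seconds instead:

watch -n 5 ./sessionwatch -d edb -U enterprisedb -p 5444 -b /opt/PostgresPlus/8.4AS/bin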