Switchover/Switchback in PostgreSQL 9.3

30 Jun

PostgreSQL 9.3 has two key software updates making switchover/switchback easier in High Availability configurations.

First, let's look at the two patches and what they do:
1. The first patch was committed by Fujii Masao.
Patch commit# 985bd7d49726c9f178558491d31a570d47340459

With this patch, the walsender process tries to send all outstanding WAL records to the standby when the user shuts down the master.

This means:
a. All WAL records are synced between the two servers after a clean shutdown of the master.
b. After promoting the standby to be the new master, the user can restart the stopped master as a new standby without taking a fresh backup from the new master.

2. The second patch was committed by Heikki Linnakangas in PostgreSQL 9.3.
Patch commit# abfd192b1b5ba5216ac4b1f31dcd553106304b19

Before PostgreSQL 9.3, streaming replication stopped replicating if the timeline on the primary didn't match the standby's. This generally happens when the user promotes one of the standbys to become the new master: promoting a standby always increments the timeline ID, and after that increment, the other standbys refuse to continue replicating.

With this patch in PostgreSQL 9.3, when the standby connects it asks the primary for any timeline history files that are missing from the standby, provided the standby's recovery.conf has the following setting:
recovery_target_timeline = 'latest'

The missing files are sent using a new replication command TIMELINE_HISTORY, and stored in the standby’s pg_xlog directory. Using the timeline history files, the standby can follow the latest timeline present in the primary, just as it can follow new timelines appearing in an archive WAL directory.
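For reference, a minimal recovery.conf on the standby might look like this (a sketch; the connection values are illustrative):

    standby_mode = 'on'
    primary_conninfo = 'host=master_host port=5432 user=repuser'
    recovery_target_timeline = 'latest'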

Thanks to the above patches, if the user performs the following sequence of steps, switchover/switchback can be achieved easily.
To switch over from Master to Standby, use the following steps:
On Master:
1. Use one of the following stop modes for a clean shutdown:

    pg_ctl stop --pgdata=data_directory --mode=fast
    or
    pg_ctl stop --pgdata=data_directory --mode=smart


2. Before promoting the standby:

Make sure all WAL sent by the Master has been applied on the Standby. Use the following functions to verify it (a quick verification query is shown after these steps):
    * pg_last_xlog_receive_location()
    * pg_last_xlog_replay_location()

3. Make sure the proper archive location and archive_command setting are accessible to the old Master.
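Putting step 2 into practice, a quick check on the standby might look like this (a minimal sketch; repeat until both locations match):

    select pg_last_xlog_receive_location() as received,
           pg_last_xlog_replay_location()  as replayed;
    -- when received = replayed, all WAL streamed from the old master
    -- has been applied and it is safe to promote the standby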

Following is a presentation I delivered at a recent BPUG Meet (Boston PostgreSQL User Group) to discuss switchover/switchback.

Monitoring approach for Streaming Replication with Hot Standby in PostgreSQL 9.3

21 May

The people using PostgreSQL and the Streaming Replication feature seem to ask many of the same questions:

1. How best to monitor Streaming Replication?

2. What is the best way to do that?

3. Are there alternatives, when monitoring on Standby, to using the pg_stat_replication view on Master?

4. How should I calculate replication lag-time, in seconds, minutes, etc.?

In light of these commonly asked questions, I thought a blog would help. The following are some methods I’ve found to be useful.

Monitoring is critical for large infrastructure deployments where you have Streaming Replication for:

1. Disaster recovery

2. High Availability

3. Load balancing, when using Streaming Replication with Hot Standby

PostgreSQL has some building blocks for replication monitoring, and the following are some important functions and views which can be used for monitoring the replication:

1. pg_stat_replication view on master/primary server.

This view lets you monitor the standby from the Master. It gives you the following details:

   pid:              Process ID of the walsender process
   usesysid:         OID of the user used for Streaming Replication
   usename:          Name of the user used for Streaming Replication
   application_name: Application name connected to the master
   client_addr:      Address of the standby/streaming replication client
   client_hostname:  Hostname of the standby
   client_port:      TCP port on which the standby communicates with the WAL sender
   backend_start:    Time when the standby connected to the Master
   state:            Current WAL sender state, i.e. streaming
   sent_location:    Last transaction log location sent to the standby
   write_location:   Last transaction log location written to disk on the standby
   flush_location:   Last transaction log location flushed to disk on the standby
   replay_location:  Last transaction log location replayed on the standby
   sync_priority:    Priority of the standby server for being chosen as the synchronous standby
   sync_state:       Sync state of the standby (async or sync)

e.g.:

postgres=# select * from pg_stat_replication ;
-[ RECORD 1 ]----+---------------------------------
pid              | 1114
usesysid         | 16384
usename          | repuser
application_name | walreceiver
client_addr      | 172.17.0.3
client_hostname  |
client_port      | 52444
backend_start    | 15-MAY-14 19:54:05.535695 -04:00
state            | streaming
sent_location    | 0/290044C0
write_location   | 0/290044C0
flush_location   | 0/290044C0
replay_location  | 0/290044C0
sync_priority    | 0
sync_state       | async

2. pg_is_in_recovery(): Function which tells whether the standby is still in recovery mode or not.

e.g.

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 t
(1 row)

3. pg_last_xlog_receive_location(): Function which gives the location of the last transaction log streamed to the Standby and written to the standby's disk.

e.g.

postgres=# select pg_last_xlog_receive_location();
 pg_last_xlog_receive_location
-------------------------------
 0/29004560
(1 row)

4. pg_last_xlog_replay_location(): Function which gives the location of the last transaction log replayed during recovery. An example is given below:

postgres=# select pg_last_xlog_replay_location();
 pg_last_xlog_replay_location
------------------------------
 0/29004560
(1 row)

5. pg_last_xact_replay_timestamp(): This function gives the timestamp of the last transaction replayed during recovery. Below is an example:

postgres=# select pg_last_xact_replay_timestamp();
  pg_last_xact_replay_timestamp
----------------------------------
 15-MAY-14 20:54:27.635591 -04:00
(1 row)

Above are some important functions/views already available in PostgreSQL for monitoring streaming replication.

So, the logical next question is, “What’s the right way to monitor the Hot Standby with Streaming Replication on Standby Server?”

If you have Hot Standby with Streaming Replication, the following are the points you should monitor:

1. Check if your Hot Standby is in recovery mode or not:

For this you can use pg_is_in_recovery() function.

2. Check whether Streaming Replication is working or not.

An easy way of doing this is checking the pg_stat_replication view on the Master/Primary. This view returns rows on the master only while Streaming Replication is working.

3. Check whether Streaming Replication has stopped and the Hot Standby is recovering from archived WAL files.

For this, the DBA can either use the PostgreSQL log file to monitor it or utilize the following functions provided in PostgreSQL:

pg_last_xlog_replay_location();
pg_last_xact_replay_timestamp();

4. Check how far the Standby is behind the Master.

There are two ways to monitor the lag for the Standby.

   i. Lag in bytes: To calculate the lag in bytes, use the pg_stat_replication view on the master together with the pg_xlog_location_diff function. Below is an example:

pg_xlog_location_diff(pg_stat_replication.sent_location, pg_stat_replication.replay_location)

which gives the lag in bytes.
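For example, a complete query to run on the master might look like this (a minimal sketch):

    select client_addr,
           pg_xlog_location_diff(sent_location, replay_location) as byte_lag
    from pg_stat_replication;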

  ii. Calculating the lag in seconds. The following is the SQL most people use to find the lag in seconds:

   SELECT CASE WHEN pg_last_xlog_receive_location() = pg_last_xlog_replay_location()
                 THEN 0
               ELSE EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp())
          END AS log_delay;

Including the above in your repertoire can give you good monitoring for PostgreSQL.

In a future post, I will include a script that can be used for monitoring the Hot Standby with PostgreSQL streaming replication.

Write Operation: MongoDB Vs PostgreSQL 9.3 (JSON)

15 May

PostgreSQL 9.3 has a lot of new improvements, like the addition of new operators for the JSON data type, which prompted me to explore its NoSQL capabilities.

MongoDB is one of the NoSQL solutions that has gotten a great deal of attention in the NoSQL market. So, this time I thought I would do some benchmarking with the JSON capability of MongoDB and the JSON datatype in PostgreSQL 9.3.

For this benchmark, I used the same machine with no optimization in the installation of PostgreSQL and MongoDB (since I wanted to see how things work out of the box with a default installation). I used the sample data from MongoDB's site, around which I developed functions that generate random data from the same sample for Mongo and for PostgreSQL.

In this benchmark, I verified the following:

1. PostgreSQL COPY Vs Mongo-Import

2. Data disk size of PostgreSQL and Mongo for the same amount of data.

3. PostgreSQL INSERT Vs Mongo Insert

Some specifications before I present the results:

1. Operating System: CentOS 6.5, 64 bit.

2. Total Memory: 1.538 GB

3. MongoDB version: 2.4.9

4. PostgreSQL: 9.3

Below are the results I got:

For Bulkload (COPY Vs MongoImport):

# of rows         1000      10000      100000      1000000
mongo-import (ms) 86.241679 569.761325 6940.837053 68610.69793
PG COPY (ms)      27.36344  176.705094 1769.641917 24801.23291
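For reference, the PostgreSQL side of the bulk load was of this general shape (a minimal sketch; the table and file names are hypothetical, and the Mongo side used mongoimport on the same data):

    CREATE TABLE json_tab (data json);
    \copy json_tab FROM 'sample_data.json'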

 

Disk space utilization:


# of rows        1000      10000     100000      1000000
mongo disks (mb) 208       208       208.2033236 976
pg size (mb)     0.3515625 3.2890625 32.71875    326.8984375

For INSERTs:


# of Inserts        1000        10000       100000      1000000
MONGO INSERTS (sec) 0.521397404 4.578372454 43.92753611 449.4023542
PG INSERTS (sec)    0.326254529 4.169742939 32.21799302 319.2562722

 

If you look at the above stats, you can see PostgreSQL JSON is much better at bulk loading and INSERTs.

Best of all, it takes much less disk space than MongoDB.

Postgres Plus Advanced Server 9.3 Features

21 Nov

Postgres Plus Advanced Server 9.3, released on Tuesday for general availability, contains all of the innovation that was part of PostgreSQL 9.3, which was released in September.
These features include the following:

1. Configuration directive ‘include_dir’
2. COPY FREEZE for more efficient bulk loading
3. Custom Background Workers
4. Data Checksums
5. JSON: Additional functionality
6. LATERAL JOIN
7. Parallel pg_dump for faster backups
8. ‘pg_isready’ server monitoring tool
9. Switch to Posix shared memory and mmap()
10. Event Triggers
11. VIEW Features:
                   Materialized Views
                   Recursive View Syntax
                   Updatable Views
12. Writeable Foreign Tables
                   postgres_fdw
13. Replication Improvements
                    Streaming-Only Remastering
                    Fast Failover
                    Architecture-Independent Streaming
                    pg_basebackup conf setup

For Postgres Plus Advanced Server, we integrated additional performance improvements, new packages, and object-oriented features into core PostgreSQL so that our database can address a wider range of enterprise use cases.

Partitioning enhancements to boost performance for INSERTs/UPDATEs/SELECTs were a major development for Postgres Plus. Below are graphs illustrating the performance increases of Postgres Plus Advanced Server 9.3 compared to the 9.2 version.

[Figure: TPS SELECT]

[Figure: TPS UPDATE]

The release features some important packages that can make a developer's life easier, such as:

1. DBMS_RANDOM package. This package helps users create random numbers, random strings, and random dates. PostgreSQL supports random functions which let users build their own wrappers on top of random() for random strings and random dates.

With this package, users can use the built-in functions for those two purposes directly and don't need to make their own wrapper functions. Below is one simple example of using it:

user=# BEGIN
user$#   DBMS_OUTPUT.put_line('Run 1 : seed=0');
user$#   DBMS_RANDOM.seed (val => 0);
user$#   FOR i IN 1 ..5 LOOP
user$#     DBMS_OUTPUT.put_line('i=' || i || ' : value=' || DBMS_RANDOM.value(low => 1, high => 10));
user$#   END LOOP;
user$#
user$#   DBMS_OUTPUT.put_line('Run 2 : seed=0');
user$#   DBMS_RANDOM.seed (val => 0);
user$#   FOR i IN 1 ..5 LOOP
user$#     DBMS_OUTPUT.put_line('i=' || i || ' : value=' || DBMS_RANDOM.value(low => 1, high => 10));
user$#   END LOOP;
user$#
user$# END;
Run 1 : seed=0
i=1 : value=6.756044853478672
i=2 : value=7.806749390438203
i=3 : value=1.1173334638588129
i=4 : value=5.221515491604802
i=5 : value=8.445010517258194
Run 2 : seed=0
i=1 : value=6.756044853478672
i=2 : value=7.806749390438203
i=3 : value=1.1173334638588129
i=4 : value=5.221515491604802
i=5 : value=8.445010517258194

EDB-SPL Procedure successfully completed
user=# BEGIN
user$#   FOR i IN 1 .. 5 LOOP
user$#     DBMS_OUTPUT.put_line('string(''x'',10)= ' || DBMS_RANDOM.string('x',10));
user$#   END LOOP;
user$# END;
string('x',10)= 1TT23XR8X2
string('x',10)= DO5D2KUUVD
string('x',10)= AGNPAXDECT
string('x',10)= 7JC6RMU9KX
string('x',10)= 13BW6JM6KN

EDB-SPL Procedure successfully completed
user=# BEGIN
user$#   FOR i IN 1 .. 5 LOOP
user$#     DBMS_OUTPUT.put_line('date= ' || TRUNC(SYSDATE + DBMS_RANDOM.value(0,366)));
user$#   END LOOP;
user$# END;
date= 20-JUN-14 00:00:00
date= 26-MAY-14 00:00:00
date= 11-JAN-14 00:00:00
date= 27-JUN-14 00:00:00
date= 21-DEC-13 00:00:00

EDB-SPL Procedure successfully completed

2. DBMS_LOCK.sleep: Similar to the pg_sleep function, the DBMS_LOCK.sleep procedure is meant for Oracle users/developers who are more familiar with Oracle packages.
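A minimal sketch of its use in an SPL block:

BEGIN
  DBMS_OUTPUT.PUT_LINE('before sleep');
  DBMS_LOCK.sleep(2);   -- pause this session for roughly 2 seconds
  DBMS_OUTPUT.PUT_LINE('after sleep');
END;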

3. DBMS_CRYPTO: DBMS_CRYPTO is new in Postgres Plus Advanced Server 9.3. It provides an interface to encrypt and decrypt stored data, and can be used in conjunction with PL/SQL programs that communicate over a network. It provides support for several industry-standard encryption and hashing algorithms.

DECLARE
   input_string       VARCHAR2 (200) :=  'Secret Message';
   output_string      VARCHAR2 (200);
   encrypted_raw      RAW (2000);             -- stores encrypted binary text
   decrypted_raw      RAW (2000);             -- stores decrypted binary text
   num_key_bytes      NUMBER := 256/8;        -- key length 256 bits (32 bytes)
   key_bytes_raw      RAW (32);               -- stores 256-bit encryption key
   encryption_type   INTEGER :=          -- total encryption type
                            DBMS_CRYPTO.ENCRYPT_DES
                          + DBMS_CRYPTO.CHAIN_CBC
                          + DBMS_CRYPTO.PAD_PKCS5;
BEGIN
   DBMS_OUTPUT.PUT_LINE ( 'Original string: ' || input_string);
   key_bytes_raw := DBMS_CRYPTO.RANDOMBYTES (num_key_bytes::INTEGER);
   encrypted_raw := DBMS_CRYPTO.ENCRYPT
      (
         src => convert_to(input_string, 'LATIN1'),
         typ => encryption_type,
         key => key_bytes_raw
      );
    -- The encrypted value "encrypted_raw" can be used here
   decrypted_raw := DBMS_CRYPTO.DECRYPT
      (
         src => encrypted_raw,
         typ => encryption_type,
         key => key_bytes_raw
      );
   output_string := convert_from(decrypted_raw,'LATIN1');

   DBMS_OUTPUT.PUT_LINE ('Decrypted string: ' || output_string);
END;

4. DBMS_SCHEDULER: With Postgres Plus 9.2 we got the DBMS_JOB package, and now Postgres Plus 9.3 comes with DBMS_SCHEDULER, which has more control options for jobs and better visibility of scheduled jobs.
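A minimal sketch of creating a job (the job name, schedule, and called procedure are hypothetical; see the DBMS_SCHEDULER documentation for the full set of options):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'nightly_refresh',              -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN refresh_reports; END;',  -- hypothetical procedure
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;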
5. UTL_ENCODE: This is another important package for handling sensitive information in the database. Using this package, users can easily encode and decode their data before keeping it in the database. The package can be utilized in new applications, and gives developers and users flexibility around important encode/decode functions. It is particularly useful for users working with applications that were written for Oracle but migrated onto Postgres Plus Advanced Server.

Let’s see an example:

user=# DECLARE
user-#     v_str VARCHAR2(100);
user$#   BEGIN
user$#      --generate encoded value
user$#     v_str := utl_encode.text_encode('EnterpriseDB','gb18030', UTL_ENCODE.BASE64);
user$#     dbms_output.put_line('Encoded string => '||v_str);
user$#
user$#      --take the encoded value and decode it
user$#    v_str := utl_encode.text_decode(v_str,'gb18030', UTL_ENCODE.BASE64);
user$#    dbms_output.put_line('Decoded string => '||v_str);
user$# END;
Encoded string => RW50ZXJwcmlzZURC
Decoded string => EnterpriseDB

EDB-SPL Procedure successfully completed

6. UTL_HTTP: Advanced Server 9.3 has a UTL_HTTP package. This package provides functions for HTTP callouts from SQL/SPL. Developers/users can use this package and its associated functions to access data on the Internet over HTTP.
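A minimal sketch (this assumes the database server has outbound HTTP access and the Oracle-style UTL_HTTP.REQUEST function; the URL is illustrative):

SELECT UTL_HTTP.REQUEST('http://www.example.com/') FROM dual;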

7. UTL_URL: This is one of the additions we developed for Postgres Plus 9.3. This package has two functions: escape and unescape mechanisms for URL characters. The escape function escapes a URL before the URL can be used to fetch data from a website. The unescape function can unescape any escaped characters in the URL before fetching the data from a website. These two functions, together with UTL_HTTP, allow users to fetch data from a website without writing complex application code to handle special characters in the URL.
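A minimal sketch of escaping (the URL is illustrative; with default arguments, ESCAPE percent-encodes illegal characters such as the space):

SELECT UTL_URL.ESCAPE('http://www.example.com/a b') FROM dual;
-- expected: http://www.example.com/a%20b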

8. New in EDB*Loader. In Postgres Plus 9.3, EDB*Loader has more control options for bulk loading.
a. ROWS parameter: prior to version 9.3, EDB*Loader processed an entire data file as a single transaction. With this parameter, users can control how much data is processed before a COMMIT is executed, giving more control over processing/loading of large data files.
b. Error codes: EDB*Loader now supports some additional exit/error codes, which will help users/developers include proper exit-code checking while using this high-speed bulk data loader with parallel processing. The exit codes are shown below:

      0: Success
      1: Failure
      2: Warning
      3: Fatal

c. Streaming output files to the client: Prior to version 9.3, users had to check the EDB*Loader log file on the server side. The log file, bad file, and discard file used to be created on the server side, and for troubleshooting users had to log in to the server to verify these files. Postgres Plus 9.3 enables these files to be created at the client site, so users are no longer required to log in to the server to verify/check them.
d. New GUC for empty strings. EDB*Loader now has a GUC which users can utilize to control the default behavior of empty strings in their data files. edbldr.empty_string has the following valid values:
1. null: an empty field is treated as null, whether the raw field contains no characters or a pair of delimiters with nothing in between.
2. empty_string: an empty field is treated as an empty string, whether the raw field contains no characters or a pair of delimiters with nothing in between.
3. pgsql: an empty field is treated as null if the raw field contains no characters, but as an empty string if it contains a pair of delimiters with nothing in between.
The default is pgsql. A short usage sketch follows this list.
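An EDB*Loader run is driven by a control file; below is a sketch (the file names, table, columns, and credentials are all hypothetical, and the exact syntax is in the EDB*Loader documentation):

-- emp.ctl: a minimal control file
LOAD DATA
  INFILE '/tmp/emp.dat'
  BADFILE '/tmp/emp.bad'
  APPEND INTO TABLE emp
  FIELDS TERMINATED BY ','
  (empno, ename, sal)

invoked from the shell, committing every 1000 rows:

edbldr -d edb userid=enterprisedb/password control=emp.ctl ROWS=1000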

9. New REGEXP functions. Postgres Plus 9.3 now has three new REGEXP functions for developers.
a. REGEXP_COUNT: This searches a string for a regular expression and returns a count of the number of times the regular expression occurs. Here's a simple example:

user=# SELECT REGEXP_COUNT('reinitializing', 'i', 1);
 regexp_count
--------------
            5
(1 row)

b. REGEXP_INSTR: This function searches a string for a POSIX-style regular expression and returns the position within the string where the match was located.

user=# SELECT REGEXP_INSTR('800-555-1212', '[0-9][0-9][0-9]', 1, 1) ;
 regexp_instr
--------------
            1
(1 row)

c. REGEXP_SUBSTR: This function searches a string for a pattern specified by a POSIX-compliant regular expression and returns the substring that matches the pattern specified in the call to the function.


user=# SELECT REGEXP_SUBSTR('800-555-1212', '[0-9][0-9][0-9]', 1, 2);
 regexp_substr
---------------
 555
(1 row)

10. New exception codes for the UTL_FILE package: The UTL_FILE package provides the capability to read from, and write to, files on the operating system's file system. To provide better error-control capabilities to developers/users, Postgres Plus 9.3 has added some new exception codes, which users can utilize in their PL/SPL code for better exception handling. One simple example is given below:

user=# CREATE DIRECTORY tempdir_fb22954 AS '/tmp/';
CREATE DIRECTORY
user=# SELECT dirname
user-# FROM edb_dir
user-# WHERE dirname='tempdir_fb22954';
     dirname
-----------------
 tempdir_fb22954
(1 row)

user=#
user=#  -- check "utl_file.invalid_operation" exception
user=# DECLARE v_testfile UTL_FILE.FILE_TYPE;
user$#
user$#  v_directory VARCHAR2(50) := 'tempdir_fb22954';
user$#
user$#  v_filename VARCHAR2(50) := 'test_file_exist.txt';
user$#
user$#  BEGIN -- Create file
user$#  v_testfile := UTL_FILE.FOPEN(v_directory,v_filename,'w');
user$#
user$#  UTL_FILE.PUT(v_testfile,'A');
user$#
user$#  UTL_FILE.NEW_LINE(v_testfile);
user$#
user$#  UTL_FILE.FCLOSE(v_testfile);
user$#
user$#  DBMS_OUTPUT.PUT_LINE('Created file: ' || v_filename);
user$#
user$#  -- The PUT below should throw an exception because the file is opened read-only.
user$#  v_testfile := UTL_FILE.FOPEN(v_directory,v_filename,'r');
user$#
user$#  UTL_FILE.PUT(v_testfile,'B');
user$#
user$#  UTL_FILE.NEW_LINE(v_testfile);
user$#
user$#  UTL_FILE.PUT(v_testfile,'C');
user$#
user$#  UTL_FILE.NEW_LINE(v_testfile);
user$#
user$#  exception
user$#  WHEN utl_file.invalid_operation THEN
user$# RAISE notice 'exception caught utl_file.invalid_operation : SQLERRM: %',
user$#                                                       sqlerrm;
user$# UTL_FILE.FCLOSE(v_testfile);
user$#  WHEN others THEN
user$# RAISE notice 'exception, others : SQLERRM: %',
user$#                         sqlerrm;
user$# UTL_FILE.FCLOSE(v_testfile);
user$#  END;
Created file: test_file_exist.txt
NOTICE:  exception caught utl_file.invalid_operation : SQLERRM: invalid file operation

EDB-SPL Procedure successfully completed

New in PostgreSQL 9.3: Data Types

12 Aug

Continuing the series on upcoming features in 9.3, I thought I would cover the new improvements coming to data types in PostgreSQL.

1. Increase the maximum length of large objects from 2GB to 4TB

PostgreSQL has supported Large Objects from the beginning; however, large objects were limited to 2GB.

From 9.3 onwards, PostgreSQL can store large objects up to 4TB. This happened by lifting a limitation in the large object API: functions like lo_seek() and lo_tell() could not return an offset over 2GB, because their offset parameters and return values are 4-byte integers. Doing the calculation, 2^31-1 bytes is about 2GB, which is where the limit came from.

To overcome this limitation, PostgreSQL comes with new APIs: the lo_seek64 and lo_tell64 functions. The libpq interface checks whether lo_tell64/lo_seek64 exist on the server and uses them, otherwise falling back to the 32-bit lo_seek/lo_tell functions; this way, compatibility is maintained between the older releases and the new release. This is a good add-on for PostgreSQL. Thanks to Tatsuo Ishii (pgpool developer and the developer of this add-on).
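Matching 64-bit variants are also exposed as server-side SQL functions; a minimal sketch is below (the OID and descriptor values are illustrative, and 131072 is the INV_WRITE flag):

BEGIN;
SELECT lo_creat(-1);                  -- suppose it returns OID 16393
SELECT lo_open(16393, 131072);        -- open for writing; returns descriptor 0
SELECT lo_lseek64(0, 3000000000, 0);  -- seek past 2GB (mode 0 = SEEK_SET)
SELECT lo_tell64(0);                  -- returns 3000000000 as a bigint
COMMIT;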

2. Text timezone designations using ISO "T" format for timestamptz

This is new for timestamptz. In 9.3, a text timezone designation is allowed in ISO "T"-format input. This did not work in pre-9.3. Let's look at one example:

In pre-9.3


worktest=# select '2011-08-29T09:11:14.123 America/Chicago'::timestamptz;
ERROR:  invalid input syntax for type timestamp with time zone: "2011-08-29T09:11:14.123 America/Chicago"
LINE 1: select '2011-08-29T09:11:14.123 America/Chicago'::timestampt...

As the user can see, pre-9.3 complained about it. However, in 9.3 this has been fixed:

pgsqltest=# select '2011-08-29T09:11:14.123 America/Chicago'::timestamptz;
        timestamptz         
----------------------------
 2011-08-29 10:11:14.123-04
(1 row)

Good to see this fix.

3. New operators and functions for JSON data strings

PostgreSQL 9.3 is coming with some new functions and operators for the JSON data type, which are add-ons for users who use the JSON data type in their applications. Now they can explore the new functions and operators for their use cases.
Pre-9.3 had the following functions:

array_to_json
row_to_json

In 9.3, we have the following new operators:

Operator  Right Operand Type  Description                                 Example
->        int                 Get JSON array element                      '[1,2,3]'::json->2
->        text                Get JSON object field                       '{"a":1,"b":2}'::json->'b'
->>       int                 Get JSON array element as text              '[1,2,3]'::json->>2
->>       text                Get JSON object field as text               '{"a":1,"b":2}'::json->>'b'
#>        array of text       Get JSON object at specified path           '{"a":[1,2,3],"b":[4,5,6]}'::json#>'{a,2}'
#>>       array of text       Get JSON object at specified path as text   '{"a":[1,2,3],"b":[4,5,6]}'::json#>>'{a,2}'
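A quick example of these operators in action:

postgres=# SELECT '{"a":1,"b":2}'::json->'b' AS b_field,
postgres-#        '{"a":[1,2,3],"b":[4,5,6]}'::json#>>'{a,2}' AS third_of_a;
 b_field | third_of_a
---------+------------
 2       | 3
(1 row)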

And the following new functions:

to_json(anyelement)
    Returns: json
    Example: to_json('Fred said "Hi."'::text)

json_array_length(json)
    Returns: int
    Example: json_array_length('[1,2,3,{"f1":1,"f2":[5,6]},4]')

json_each(json)
    Returns: SETOF key text, value json
    Example: select * from json_each('{"a":"foo", "b":"bar"}')

json_each_text(from_json json)
    Returns: SETOF key text, value text
    Example: select * from json_each_text('{"a":"foo", "b":"bar"}')

json_extract_path(from_json json, VARIADIC path_elems text[])
    Returns: json
    Example: json_extract_path('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4')

json_extract_path_text(from_json json, VARIADIC path_elems text[])
    Returns: text
    Example: json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"foo"}}','f4','f6')

json_object_keys(json)
    Returns: SETOF text
    Example: json_object_keys('{"f1":"abc","f2":{"f3":"a","f4":"b"}}')

json_populate_record(base anyelement, from_json json [, use_json_as_text bool=false])
    Returns: anyelement
    Example: select * from json_populate_record(null::x, '{"a":1,"b":2}')

json_populate_recordset(base anyelement, from_json json [, use_json_as_text bool=false])
    Returns: SETOF anyelement
    Example: select * from json_populate_recordset(null::x, '[{"a":1,"b":2},{"a":3,"b":4}]')

json_array_elements(json)
    Returns: SETOF json
    Example: json_array_elements('[1,true,[2,false]]')
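For instance, the set-returning functions can be used directly in a query:

postgres=# SELECT * FROM json_each_text('{"a":"foo", "b":"bar"}');
 key | value
-----+-------
 a   | foo
 b   | bar
(2 rows)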

4. New functions to convert hstore to JSON

PostgreSQL 9.3 is also coming with new functions for converting hstore values to JSON.
Following are the new hstore-to-JSON functions, with examples:

hstore_to_json(hstore)
    Returns: json
    Description: get hstore as a json value
    Example: hstore_to_json('"a key"=>1, b=>t, c=>null, d=>12345, e=>012345, f=>1.234, g=>2.345e+4')
    Result:  {"a key": "1", "b": "t", "c": null, "d": "12345", "e": "012345", "f": "1.234", "g": "2.345e+4"}

hstore_to_json_loose(hstore)
    Returns: json
    Description: get hstore as a json value, but attempting to distinguish numerical and Boolean values so they are unquoted in the JSON
    Example: hstore_to_json_loose('"a key"=>1, b=>t, c=>null, d=>12345, e=>012345, f=>1.234, g=>2.345e+4')
    Result:  {"a key": 1, "b": true, "c": null, "d": 12345, "e": "012345", "f": 1.234, "g": 2.345e+4}
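A quick sanity check (this assumes the hstore extension is installed in the database):

postgres=# CREATE EXTENSION IF NOT EXISTS hstore;
postgres=# SELECT hstore_to_json_loose('b=>t, d=>12345, f=>1.234');
        hstore_to_json_loose
-------------------------------------
 {"b": true, "d": 12345, "f": 1.234}
(1 row)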

New in PostgreSQL 9.3: New in Functions

4 Aug

In the series of blog posts about new features in 9.3, today I thought I would cover the new functions and improvements coming in PostgreSQL.

Let's look at what's new in 9.3 in terms of built-in functions:

1. New array functions for one-dimensional arrays

PostgreSQL 9.3 is coming with new functions which help users manipulate one-dimensional arrays by calling simple functions, instead of crafting their own functions and methods to do the modifications.

i. array_remove function

This is a new function added in 9.3 which provides the ability to remove elements from an array. The function takes two arguments:
a. the one-dimensional array from which the user wants to remove elements
b. the element value which the user wants to remove

The syntax of this function is given below:

ARRAY_REMOVE(<one-dimensional array>, element)

An example of array_remove is given below:

postgres=# select array_remove(ARRAY['First','Second','Delete','Four'],'Delete');
    array_remove     
---------------------
 {First,Second,Four}
(1 row)

ii. array_replace function

This is new in 9.3; it helps users replace any element in an array. This function can be used for multi-dimensional arrays or for single-dimensional arrays.
An example of array_replace is given below:

pgsqltest=# select array_replace(ARRAY[ARRAY[1,2],ARRAY[3,4],ARRAY[56,6],ARRAY[7,8]],56,5);
       array_replace       
---------------------------
 {{1,2},{3,4},{5,6},{7,8}}
(1 row)

2. VARIADIC-labeled argument expansion for the concat and format functions

This is a new addition to the concat and format functions. Pre-9.3, these functions used to ignore the VARIADIC label and didn't print the right output. However, this has been fixed in 9.3. Let's look at examples from pre-9.3 and 9.3.

In pre-9.3

worktest=# select concat(variadic array[1,2,3]);
 concat  
---------
 {1,2,3}
(1 row)

In 9.3

pgsqltest=# select concat(variadic array[1,2,3]);
 concat 
--------
 123
(1 row)

You can see above that in 9.3, the VARIADIC label works properly.

Let's look at the format() function:

In pre-9.3

worktest=# select format(string_agg('%s',','), variadic array_agg(i)) from generate_series(1,3) g(i);
ERROR:  too few arguments for format

Wow, it errors out. However, this is fixed in 9.3 :-).

In 9.3

pgsqltest=# select format(string_agg('%s',','), variadic array_agg(i)) from generate_series(1,3) g(i);
 format 
--------
 1,2,3
(1 row)

3. Improvement in the format() function to handle field width and left/right alignment

9.3 adds the following enhancements to the format() function, which will increase its usability for developers:
i). Proper handling of field width.
ii). Proper handling of left/right alignment.

The above two improvements were missing in pre-9.3. With their addition, the format() function now follows the C sprintf() function.

Let's look at an example:
In pre-9.3, using a field width resulted in an error message, as given below:

worktest=#  select format('>>%10s<<', 'Hello');
ERROR:  unterminated conversion specifier
worktest=# 

However, 9.3 comes with proper field width support.

pgsqltest=#  select format('>>%10s<<', 'Hello');
     format     
----------------
 >>     Hello<<
(1 row)

9.3 also comes with proper handling of left/right alignment. Examples are given below:
Left alignment

pgsqltest=# select format('>>%-10s<<', 'Hello');
     format     
----------------
 >>Hello     <<
(1 row)

Right alignment

pgsqltest=# select format('>>%1$10s<<', 'Hello');
     format     
----------------
 >>     Hello<<
(1 row)

4. Proper handling of negative centuries in the to_char, to_date and to_timestamp functions

In pre-9.3, the behavior of the following functions for negative centuries was wrong or inconsistent:
a. to_char
b. to_date
c. to_timestamp

However, 9.3 comes with proper handling of negative centuries. Let's look at the output in pre-9.3 and in 9.3 for each function.

pre-9.3

worktest=# select to_timestamp('4713-01-08 BC','YYYY-DD-MM bc');
         to_timestamp         
------------------------------
 01-AUG-13 00:00:00 -04:56:02
(1 row)

Above, you can see that it displays the wrong result for BC. In 9.3:

pgsqltest=# select to_timestamp('4713-01-08 BC','YYYY-DD-MM bc');
          to_timestamp           
---------------------------------
 4713-08-01 00:00:00-04:56:02 BC
(1 row)

Let's check the to_date function. In 9.2/pre-9.3:

worktest=# SELECT to_date('4713-01-08 BC','YYYY-DD-MM bc');
ERROR:  full year must be between -4712 and +9999, and not be 0
worktest=# 

In 9.3

pgsqltest=# SELECT to_date('4713-01-08 BC','YYYY-DD-MM bc');
    to_date    
---------------
 4713-08-01 BC
(1 row)

Above, you can see that in 9.3 the to_date function works right, while to_date in 9.2 wasn't able to handle it. You can see similar behavior for the to_char function.

5. Improvement in pg_get_viewdef() to print a new line after each SELECT and FROM entry

This is an enhancement made to the pg_get_viewdef function in 9.3, and users will be able to see it in pg_dump plain-format dumps too.

This enhancement improves the readability of view definitions in 9.3 and reduces the line length of the definition. Let's look at how this improvement makes the experience of viewing a view definition much better.

Before 9.3 or in pre-9.3

worktest=# select pg_get_viewdef('pg_tables'::regclass);
                                                                                                    
                                                                                  pg_get_viewdef    
                                                                                                    
                                                                              
----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------
 SELECT n.nspname AS schemaname, c.relname AS tablename, pg_get_userbyid(c.relowner) AS tableowner, 
t.spcname AS tablespace, c.relhasindex AS hasindexes, c.relhasrules AS hasrules, c.relhastriggers AS
 hastriggers FROM ((pg_class c LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace))) LEFT JOIN pg_
tablespace t ON ((t.oid = c.reltablespace))) WHERE (c.relkind = 'r'::"char");
(1 row)

Above, you can see that pg_get_viewdef printed the definition of the view on one line, which not only makes the view definition difficult to read but also produces a very long line.

Let's look at pg_get_viewdef in 9.3:

pgsqltest=# select pg_get_viewdef('pg_tables'::regclass);
                        pg_get_viewdef                        
--------------------------------------------------------------
  SELECT n.nspname AS schemaname,                            +
     c.relname AS tablename,                                 +
     pg_get_userbyid(c.relowner) AS tableowner,              +
     t.spcname AS tablespace,                                +
     c.relhasindex AS hasindexes,                            +
     c.relhasrules AS hasrules,                              +
     c.relhastriggers AS hastriggers                         +
    FROM ((pg_class c                                        +
    LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))  +
    LEFT JOIN pg_tablespace t ON ((t.oid = c.reltablespace)))+
   WHERE (c.relkind = 'r'::"char");
(1 row)

which seems much better to me in terms of readability and doesn't have long lines.

Enjoy!!

New in PostgreSQL 9.3: Server Side languages

30 Jul

In the series of blog posts about new features coming in PostgreSQL 9.3, I thought I would blog about the server-side language improvements in PostgreSQL. Let's see what's coming in the server-side languages.

As a PostgreSQL user, you know that PostgreSQL supports multiple server-side languages. In 9.3, some interesting features are coming. Let's look at the new improvements.

1. SPI access to the number of rows processed by the COPY command.

This is a new feature introduced in 9.3. Before 9.3, if a user ran a COPY command inside a plperl/plpython function, there was no way inside the procedure to trace the number of rows processed by COPY. In 9.3, this limitation no longer exists for procedural languages. There are many languages supported in PostgreSQL; I chose to test this with the most-used languages, plperl and plpython. Below are some snapshots from pre-9.3 and 9.3.

Let's check with plperl. The definition of the test table is given below:

   Table "public.test_copy"
 Column |  Type   | Modifiers 
--------+---------+-----------
 id     | numeric | 

Content of the data file:

cat /tmp/test.data
1
2
3
4
5

Following is a plperl function which can be used for testing in pre-9.3 and in 9.3:

CREATE OR REPLACE FUNCTION test_copy() RETURNS integer
AS $$
    my $rv = spi_exec_query("COPY test_copy FROM '/tmp/test.data'");
    my $status = $rv->{status};
    my $nrows = $rv->{processed};
    return $nrows;
$$ LANGUAGE plperl;

If we execute the COPY command at the psql prompt, the user will get a message like the one given below:

worktest=# COPY test_copy from '/tmp/test.data';
COPY 5

which shows COPY processed 5 rows and loaded them into the table accordingly.

If we use the above plperl function in pre-9.3, we get the following result:

worktest=# select split_part(version(),' ',2) as version;
 version  
----------
 9.2.4.10
(1 row)

worktest=# select test_copy();
 test_copy 
-----------
         0
(1 row)

which shows the function was not able to get the number of rows processed by the COPY command.
However, if we use the same plperl function in 9.3, we get the following result:

postgres=# select split_part(version(),' ',2) as version;
 version  
----------
 9.3beta2
(1 row)

postgres=# select test_copy();
 test_copy 
-----------
         5
(1 row)

which shows that the plperl function in 9.3 is able to get the number of rows processed.

Similarly, we can use the following plpython function to test this new feature:

CREATE FUNCTION result_copy_test(cmd text) RETURNS int
AS $$
plan = plpy.prepare(cmd)
plpy.info(plan.status()) 
result = plpy.execute(plan)
return result.nrows()
$$ LANGUAGE plpythonu;

pre-9.3

worktest=#  SELECT result_copy_test($$ COPY test_copy FROM '/tmp/test.data' $$);
INFO:  True
CONTEXT:  PL/Python function "result_copy_test"
 result_copy_test 
------------------
                0
(1 row)

which shows the function was not able to get the rows processed by the COPY command.

In 9.3

postgres=# SELECT result_copy_test($$ COPY test_copy FROM '/tmp/test.data' $$);
INFO:  True
CONTEXT:  PL/Python function "result_copy_test"
 result_copy_test 
------------------
                5
(1 row)

which shows the function was able to get the rows processed by the COPY command.

2. Allow GET DIAGNOSTICS x = ROW_COUNT to access rows processed by COPY
This is an enhancement in plpgsql: if a user wants to access the number of rows processed by a COPY command in a plpgsql block, they can use GET DIAGNOSTICS variable = ROW_COUNT, which was not possible before 9.3.

To check this with plpgsql, the user can either create a sample function or use the following anonymous block to verify the feature. I will be using a plpgsql DO block to verify it.
An example is given below:
PL/pgSQL anonymous block

DO $$
  DECLARE
    r int; 
  BEGIN
     COPY test_copy FROM '/tmp/test.data';
     GET DIAGNOSTICS r = row_count;
     RAISE NOTICE 'processed rows => %',r;
  END;
$$ language plpgsql;

pre-9.3

worktest=# DO $$
worktest$#   DECLARE
worktest$#     r int; 
worktest$#   BEGIN
worktest$#      COPY test_copy FROM '/tmp/test.data';
worktest$#      GET DIAGNOSTICS r = row_count;
worktest$#      RAISE NOTICE 'processed rows => %',r;
worktest$#   END;
worktest$# $$ language plpgsql;
NOTICE:  processed rows => 0
DO

which shows that pre-9.3 was not able to get the processed rows.

In 9.3

postgres=# DO $$
postgres$#   DECLARE
postgres$#     r int; 
postgres$#   BEGIN
postgres$#      COPY test_copy FROM '/tmp/test.data';
postgres$#      GET DIAGNOSTICS r = row_count;
postgres$#      RAISE NOTICE 'processed rows => %',r;
postgres$#   END;
postgres$# $$ language plpgsql;
NOTICE:  processed rows => 5
DO
postgres=#

In 9.3, plpgsql is able to get the processed rows. Interesting!

3. Allow use of RETURN with a composite expression in PL/pgSQL.
This is a new addition in PL/pgSQL, useful for users who are reluctant to declare too many variables and want RETURN in PL/pgSQL to return an expression directly.

This feature was not available in pre-9.3 PL/pgSQL. However, in 9.3, users can use it easily. Below is an example.

create type footype as (x int, y varchar);

create or replace function foo() returns footype as $$
begin
  return (1, 'hello')::footype;
end;
$$ language plpgsql;

Let's try with pre-9.3 first. When the user tries to create the above function, pre-9.3 gives the following error message:

worktest=# create or replace function foo() returns footype as $$
worktest$# begin
worktest$#   return (1, 'hello')::footype;
worktest$# end;
worktest$# $$ language plpgsql;
ERROR:  RETURN must specify a record or row variable in function returning row
LINE 3:   return (1, 'hello')::footype;

In 9.3, due to the addition of this feature, it works great:

postgres=# create or replace function foo() returns footype as $$
postgres$# begin
postgres$#   return (1, 'hello')::footype;
postgres$# end;
postgres$# $$ language plpgsql;
CREATE FUNCTION
postgres=# 

And the user can see the working output too:

postgres=# select foo();
    foo    
-----------
 (1,hello)
(1 row)

4. New in PL/Python
There are some good improvements coming in plpython too. These improvements will be useful for users who write plpython functions and want to debug them or print some useful information. This was missing in pre-9.3. Let's look at what's new in plpython.

i. Addition of an informative object string in PL/Python
Before 9.3, whenever a user wanted to look at the information provided by a result-object handle in PL/Python, it didn't give useful information. In 9.3, it gives some useful information which will be helpful for PL/Python users.

Let's look at an example of how this change is useful. Following is a small function which can be used for testing this new addition:

CREATE FUNCTION test_debug_info() RETURNS text AS $$
    try:
        rv=plpy.execute("SELECT datname FROM pg_catalog.pg_database",5)
        plpy.info(rv);
    except plpy.SPIError:
        return "Not working"
    else:
        return "Working good"
$$ LANGUAGE plpythonu;

pre 9.3

worktest=# select test_debug_info() ;
INFO:  <PLyResult object at 0x7f3594b8f270>
CONTEXT:  PL/Python function "test_debug_info"
 test_debug_info 
-----------------
 Working good
(1 row)

Above, the user can see that INFO hasn't given very useful information.

In 9.3

postgres=# select test_debug_info() ;
INFO:  <PLyResult status=5 nrows=3 rows=[{'datname': 'template1'}, {'datname': 'template0'}, {'datname': 'postgres'}]>
CONTEXT:  PL/Python function "test_debug_info"
 test_debug_info 
-----------------
 Working good
(1 row)

Above, you can see INFO has given some useful information about the status, the number of rows, and the rows returned by the query. Very useful if somebody wants to know what the handle contains in PL/Python.

ii. Conversion of OID values to a proper PL/Python numeric type
Before 9.3, the OID database type used to be treated as a string in PL/Python, which forced developers to convert the string into a numeric data type in plpython before doing any processing on it, which is extra coding. In 9.3 they don't have to worry any more.
Below is an example to test this feature.

CREATE FUNCTION test_type_conversion_oid(x oid) RETURNS oid AS $$
   plpy.info(x, type(x))
   return x +1
$$ LANGUAGE plpythonu;

Lets look at the pre 9.3

worktest=# select test_type_conversion_oid(123456);
INFO:  ('123456', <type 'str'>)
CONTEXT:  PL/Python function "test_type_conversion_oid"
ERROR:  TypeError: cannot concatenate 'str' and 'int' objects
CONTEXT:  Traceback (most recent call last):
  PL/Python function "test_type_conversion_oid", line 3, in <module>
    return x +1
PL/Python function "test_type_conversion_oid"

which shows direct numeric operations were not possible with OID in pre-9.3, since OID in pre-9.3 was treated as a string.

In 9.3

postgres=# select test_type_conversion_oid(123456)
postgres-# ;
INFO:  (123456L, <type 'long'>)
CONTEXT:  PL/Python function "test_type_conversion_oid"
 test_type_conversion_oid 
--------------------------
                   123457
(1 row)

This works great, and direct numeric operations with OID are possible. A good addition to plpython.

iii. Handle SPI errors raised explicitly (with PL/Python's RAISE) the same as internal SPI errors
This is a new addition to plpython. Now, in a plpython function's body, the user can raise SPIError exceptions using plpython's "raise" statement, and they are handled the same way as internal SPI errors, which was missing in pre-9.3. In addition, if the user sets the sqlstate attribute, plpython preserves that change.
Below is an example:

Following is the definition of a function which can be used for testing this addition in pre-9.3 and in 9.3:

CREATE OR REPLACE FUNCTION plpy_raise_spiexception() RETURNS void AS $$
   raise plpy.spiexceptions.DivisionByZero()
$$ LANGUAGE plpythonu;

Let's use a plpgsql anonymous block to test how the raised exception can be handled.
Before 9.3:

DO $$
BEGIN
   SELECT plpy_raise_spiexception();
EXCEPTION WHEN division_by_zero THEN
   RAISE NOTICE 'Found Exception';
END
$$ LANGUAGE plpgsql;

ERROR:  spiexceptions.DivisionByZero: 
CONTEXT:  Traceback (most recent call last):
  PL/Python function "plpy_raise_spiexception", line 2, in <module>
    raise plpy.spiexceptions.DivisionByZero()
PL/Python function "plpy_raise_spiexception"
SQL statement "SELECT plpy_raise_spiexception()"
PL/pgSQL function inline_code_block line 3 at SQL statement

As you can see, before 9.3 an exception raised explicitly with the raise statement was not handled properly and could not be caught like an internal SPI error.

In 9.3

postgres=# DO $$
postgres$# BEGIN
postgres$#    SELECT plpy_raise_spiexception();
postgres$# EXCEPTION WHEN division_by_zero THEN
postgres$#    RAISE NOTICE 'Found Exception';
postgres$# END
postgres$# $$ LANGUAGE plpgsql;
NOTICE:  Found Exception
DO

As you can see, in 9.3 it works great!
