Aakash M,
Database Reliability Engineer,
Mydbops
October 23, 2021
Mydbops 10th MyWebinar
What is new in PostgreSQL 14
Database Reliability Engineer
Focusing on MySQL and PostgreSQL
Expertise on performance tuning and troubleshooting
Active blogger
About Me
Services on top open source databases
Founded in 2016
A team of 70+ members
Assisted 500+ customers
AWS Partner and a PCI-certified organisation
About Mydbops
Consulting
Services
Managed
Services
Focuses on top open source databases: MySQL,
MongoDB and PostgreSQL
Mydbops Services
500+ Clients in 5 Years of Operations
Our Clients
PostgreSQL 14
Overview
• Released on September 30, 2021
• Includes many new features and bug fixes
• Minor releases every quarter
• Major release every third quarter
New Features
PostgreSQL 14 Features
Server Side Enhancements
01
Improvements to Query Parallelism
02
Security Enhancements
03
Features for Application Developers
04
PostgreSQL 14 Features
Enhancements in Monitoring
05
Enhancements in pg_stat_statements
06
Server Side Enhancements
Reducing B-Tree Index Bloat
BLOAT
• Dead tuples cause bloat
• VACUUM marks them as reusable
• Index scans have to read many extra pages
• Unnecessary pages occupy RAM
Heap Only Tuple
• Introduced in version 8.3
• When an update changes no indexed column, the index does not need to be updated
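As a minimal sketch of this behaviour (the table and column names here are illustrative, and assume any reasonably recent PostgreSQL server), an update that touches only a non-indexed column can be completed as a HOT update, which is visible in the statistics views:

```sql
-- Only "id" is indexed; "payload" is not.
CREATE TABLE hot_demo (id integer PRIMARY KEY, payload text);
INSERT INTO hot_demo VALUES (1, 'a');

-- Touches no indexed column, so it is eligible for a HOT update.
UPDATE hot_demo SET payload = 'b' WHERE id = 1;

-- n_tup_hot_upd counts updates that skipped index maintenance.
SELECT n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'hot_demo';
```

Statistics are reported asynchronously, so the counters may lag by a moment after the update.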
Heap Only Tuple
[Diagram, repeated over several slides: an index with entries 1, 2, 3 pointing into a heap table containing rows (3, X), (2, Y), (1, Z). Updating row 3 to (3, A) places the new version on the same heap page; the old tuple links to the new one (a HOT chain), so the index entry for 3 stays valid. A contrasting slide shows a non-HOT update, where a second index entry for 3 has to be added.]
Killed Index Tuples
CREATE UNLOGGED TABLE test (
id integer PRIMARY KEY,
val text NOT NULL
) WITH (autovacuum_enabled = off);
INSERT INTO test SELECT i, 'text number ' || i FROM
generate_series(1, 1000000) AS i;
DELETE FROM test WHERE id BETWEEN 501 AND 799500;
Analyze test;
Killed Index Tuples
test=# EXPLAIN (ANALYZE, BUFFERS, COSTS OFF, TIMING OFF)
SELECT * FROM test WHERE id BETWEEN 1 AND 800000;
                           QUERY PLAN
---------------------------------------------------------------
 Index Scan using test_pkey on test (actual rows=1000 loops=1)
   Index Cond: ((id >= 1) AND (id <= 800000))
   Buffers: shared hit=7284
 Planning:
   Buffers: shared hit=25
 Planning Time: 0.204 ms
 Execution Time: 95.223 ms
(7 rows)
• First execution: index entries for the deleted rows are still scanned (7284 buffers)
Killed Index Tuples
test=# EXPLAIN (ANALYZE, BUFFERS, COSTS OFF, TIMING OFF)
SELECT * FROM test WHERE id BETWEEN 1 AND 800000;
                           QUERY PLAN
---------------------------------------------------------------
 Index Scan using test_pkey on test (actual rows=1000 loops=1)
   Index Cond: ((id >= 1) AND (id <= 800000))
   Buffers: shared hit=2196
 Planning:
   Buffers: shared hit=8
 Planning Time: 0.196 ms
 Execution Time: 3.815 ms
(7 rows)
• Second execution: the first scan marked those entries as killed, so far fewer pages are read (2196 buffers)
Bottom-Up Index Tuple Deletion
• Introduced in version 14
• Deletes index entries that point to dead tuples
• Performs part of VACUUM's work on the fly, just before a page would have to split
Replicating in-progress Transactions
Logical Replication - Publisher/Subscriber Model
[Diagram: on the source/publisher, WAL records feed the WAL sender, which decodes them into logical messages for the publication; on the destination/subscriber, the logical replication launcher applies them through the subscription.]
logical_decoding_work_mem - PostgreSQL 13
[Diagram: decoded changes of an open transaction accumulate in logical_decoding_work_mem on the publisher; nothing reaches the subscriber until the transaction commits, and oversized transactions spill to disk.]
Streaming in-progress Transactions - PostgreSQL 14
[Diagram: once a transaction's decoded changes exceed logical_decoding_work_mem, the WAL sender can stream them to the subscriber while the transaction is still in progress.]
Streaming in-progress Transactions - PostgreSQL 14
[publisher configuration]:
wal_level = logical
synchronous_standby_names = '*'
logical_decoding_work_mem = 1MB   # lowered so large transactions exceed it quickly
Streaming in-progress Transactions - PostgreSQL 14
[publisher setup]
CREATE TABLE test(id int, name text);
CREATE PUBLICATION test_pub FOR TABLE test;
[subscriber setup]
CREATE TABLE test(id int, name text);
CREATE SUBSCRIPTION test_sub CONNECTION 'host=127.0.0.1 port=5432 dbname=postgres' PUBLICATION test_pub;
Streaming in-progress Transactions - PostgreSQL 14
INSERT INTO test SELECT i, REPEAT('x', 10) FROM generate_series(1,5000000) AS i;
Time taken for COMMIT: 6.7 secs
Streaming in-progress Transactions - PostgreSQL 14
ALTER SUBSCRIPTION test_sub SET (streaming = on);
INSERT INTO test SELECT i, REPEAT('x', 10) FROM generate_series(1,5000000) AS i;
Time taken for COMMIT: 3.5 secs
LZ4 Toast Compression
TOAST
• Mechanism to keep a row from exceeding the size of a data block
• First tries to compress wide field values
• Then splits them into smaller pieces stored out of line
• By default this kicks in at around 2KB
• Every table with TOAST-able columns has its own TOAST table
New Compression Configuration
• pglz - the built-in (and previously only) compression method
• lz4 - newly added, generally faster compression method
• New configuration variable - default_toast_compression
• Requires PostgreSQL to be built with --with-lz4
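A brief sketch of how the new settings could be used (assumes a v14 server built with --with-lz4; the table and column names are illustrative):

```sql
-- Set the default for newly created columns
-- (can also be set in postgresql.conf).
SET default_toast_compression = 'lz4';

-- Or choose per column with the new COMPRESSION clause.
CREATE TABLE docs (
    id   integer PRIMARY KEY,
    body text COMPRESSION lz4
);

-- Inspect which method a stored value actually used.
INSERT INTO docs VALUES (1, repeat('compress me ', 1000));
SELECT pg_column_compression(body) FROM docs WHERE id = 1;
```

Existing values keep their old compression method; only newly written values use the new setting.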
Idle Session Timeout
idle_session_timeout
postgres=# select name,setting,unit,context,pending_restart from
pg_settings where name = 'idle_session_timeout' \gx
-[ RECORD 1 ]---+---------------------
name | idle_session_timeout
setting | 0
unit | ms
context | user
pending_restart | f
• New variable in version 14
• Serves almost the same purpose as idle_in_transaction_session_timeout
idle_in_transaction_session_timeout
postgres=# select name,setting,unit,context,pending_restart from
pg_settings where name = 'idle_in_transaction_session_timeout' \gx
-[ RECORD 1 ]---+------------------------------------
name | idle_in_transaction_session_timeout
setting | 0
unit | ms
context | user
pending_restart | f
• Available since PostgreSQL 9.6
• Maximum allowed idle time between queries while inside a transaction
idle_in_transaction_session_timeout
postgres=# set idle_in_transaction_session_timeout=2000;
SET
postgres=#
postgres=# start transaction;
START TRANSACTION
postgres=*# select name,setting from pg_settings where name =
'idle_in_transaction_session_timeout' \gx
-[ RECORD 1 ]--------------------------------
name | idle_in_transaction_session_timeout
setting | 2000
• Opened a new session
• Started a new transaction
idle_in_transaction_session_timeout
postgres=# start transaction;
START TRANSACTION
postgres=*# select name,setting from pg_settings where name =
'idle_in_transaction_session_timeout' \gx
-[ RECORD 1 ]--------------------------------
name | idle_in_transaction_session_timeout
setting | 2000
postgres=*# select name,setting from pg_settings where name =
'idle_in_transaction_session_timeout' \gx
FATAL: terminating connection due to idle-in-transaction timeout
server closed the connection unexpectedly
	 This probably means the server terminated abnormally
	 before or while processing the request.
The connection to the server was lost. Attempting reset:
Succeeded.
postgres=#
idle_session_timeout
postgres=# set idle_session_timeout = 5000;
SET
postgres=# select name,setting from pg_settings where name =
'idle_in_transaction_session_timeout' \gx
-[ RECORD 1 ]--------------------------------
name | idle_in_transaction_session_timeout
setting | 0
postgres=# select name,setting from pg_settings where name =
'idle_in_transaction_session_timeout' \gx
FATAL: terminating connection due to idle-session timeout
server closed the connection unexpectedly
	 This probably means the server terminated abnormally
	 before or while processing the request.
The connection to the server was lost. Attempting reset:
Succeeded.
• Maximum idle time between queries when not inside a transaction
idle_session_timeout
• Helps to clean up idle connections
• Applies only to newly opened connections
• Should be set to a higher value than the application's own timeout
Improvements to Query Parallelism
Refresh Materialized View
Materialized View
• A database object
• Stores the result of its defining query
• Must be refreshed to pick up new data
Materialized View
(postgres13)=> CREATE TABLE table_test (grp int, data numeric);
CREATE TABLE
Time: 3.282 ms
(postgres13)=>
(postgres13)=> INSERT INTO table_test SELECT 100, random() FROM
generate_series(1, 5000000);
INSERT 0 5000000
Time: 8064.350 ms (00:08.064)
(postgres13)=>
(postgres13)=> INSERT INTO table_test SELECT 200, random() FROM
generate_series(1, 5000000);
INSERT 0 5000000
Time: 9250.536 ms (00:09.251)
(postgres13)=>
• Loaded data in PostgreSQL 13
Materialized View
(postgres14)=> CREATE TABLE table_test (grp int, data numeric);
CREATE TABLE
Time: 13.577 ms
(postgres14)=>
(postgres14)=> INSERT INTO table_test SELECT 100, random() FROM
generate_series(1, 5000000);
INSERT 0 5000000
Time: 7204.919 ms (00:07.205)
(postgres14)=>
(postgres14)=> INSERT INTO table_test SELECT 200, random() FROM
generate_series(1, 5000000);
INSERT 0 5000000
Time: 8618.068 ms (00:08.618)
• Loaded data in PostgreSQL 14
Materialized View
(postgres13)=> SELECT grp, avg(data), count(*) FROM table_test
GROUP BY 1;
grp | avg | count
-----+-------------------------+---------
100 | 0.499979689549607876793 | 5000000
200 | 0.500123546167930387393 | 5000000
(2 rows)
Time: 960.802 ms
(postgres14)=> SELECT grp, avg(data), count(*) FROM table_test
GROUP BY 1;
grp | avg | count
-----+-------------------------+---------
100 | 0.500173019196875804072 | 5000000
200 | 0.500065682335289928739 | 5000000
(2 rows)
Time: 965.867 ms
• Executing the same query on both versions
Materialized View
(postgres13)=> CREATE MATERIALIZED VIEW test_view AS SELECT grp,
avg(data), count(*) FROM table_test GROUP BY 1;
SELECT 2
Time: 983.138 ms
(postgres14)=> CREATE MATERIALIZED VIEW test_view AS SELECT grp,
avg(data), count(*) FROM table_test GROUP BY 1;
SELECT 2
Time: 978.225 ms
(postgres14)=>
• Creating materialized view for the static table
Materialized View
(postgres13)=> select * from test_view;
grp | avg | count
-----+-------------------------+---------
100 | 0.499979689549607876793 | 5000000
200 | 0.500123546167930387393 | 5000000
(2 rows)
Time: 0.561 ms
(postgres14)=> select * from test_view;
grp | avg | count
-----+-------------------------+---------
100 | 0.500173019196875804072 | 5000000
200 | 0.500065682335289928739 | 5000000
(2 rows)
Time: 0.540 ms
• Querying the materialized view instead of table
Refresh Materialized View
(postgres13)=> refresh materialized view test_view;
REFRESH MATERIALIZED VIEW
Time: 2027.911 ms (00:02.028)
(postgres13)=>
(postgres14)=> refresh materialized view test_view;
REFRESH MATERIALIZED VIEW
Time: 961.503 ms
• Refreshing the materialized view to get the updated result
Refresh Materialized View
(postgres14)=> explain SELECT grp, avg(data), count(*) FROM
table_test GROUP BY 1;
                           QUERY PLAN
------------------------------------------------------------------
 Finalize GroupAggregate (cost=127997.34..127997.87 rows=2 width=44)
   Group Key: grp
   -> Gather Merge (cost=127997.34..127997.81 rows=4 width=44)
        Workers Planned: 2
        -> Sort (cost=126997.32..126997.32 rows=2 width=44)
             Sort Key: grp
             ...... on table_test (cost=0.00..95747.02 rows=4166702 width=15)
(9 rows)
Time: 0.438 ms
• Explain plan with parallel workers
Refresh Materialized View
• Creating a materialized view could already use parallelism; refreshing it could not until v14
• On versions < 14 it can be faster to drop and recreate the view than to refresh it
• REFRESH locks the view's contents while it runs
• A CONCURRENTLY option avoids that, but a unique index (e.g. a PK) is mandatory
Return Query
• Queries executed via PL/pgSQL's RETURN QUERY can now use parallel plans
Security Improvements
Default Authentication Method
Default Authentication Method - scram-sha-256
• Previously MD5 was the default password hashing method
• MD5 hashes are easy to crack by modern standards
• scram-sha-256 uses a deliberately expensive, salted hash
• Introduced in v10, now made the default
• Clients must support it; older drivers fail with:
authentication method 10 not supported
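A hedged sketch of a migration check (the role name app_user is hypothetical): existing users keep their MD5 verifiers until their password is re-set, so it can be worth auditing pg_authid first:

```sql
-- Default hashing for newly set passwords in v14.
SHOW password_encryption;   -- scram-sha-256

-- See which roles still carry MD5 verifiers ("md5..." prefix).
SELECT rolname, substr(rolpassword, 1, 6) AS verifier_prefix
FROM pg_authid
WHERE rolpassword IS NOT NULL;

-- Re-setting a password stores a SCRAM verifier.
ALTER USER app_user PASSWORD 'new-secret';
```

A pg_hba.conf line using the md5 method continues to work for users who already have SCRAM verifiers, which eases a gradual migration.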
Predefined Roles
Predefined Roles - pg_read_all_data or pg_write_all_data
Previously the choices were to:
• Make the user a superuser, or
• Grant access to the complete database object by object
Predefined Roles - pg_read_all_data
• Read all data (tables, views and sequences)
• SELECT on all objects and USAGE on all schemas
Predefined Roles - pg_write_all_data
• Write all data (tables, views and sequences)
• INSERT, UPDATE, DELETE on all objects and USAGE on all schemas
Predefined Roles - pg_read_all_data or pg_write_all_data
(postgres14)=> create user testing;
CREATE ROLE
Time: 5.468 ms
(postgres14)=> set role to 'testing';
SET
Time: 0.392 ms
(postgres14)=> SELECT grp, avg(data), count(*) FROM table_test
GROUP BY 1;
ERROR: permission denied for table table_test
Time: 2.010 ms
(postgres14)=>
Predefined Roles - pg_read_all_data or pg_write_all_data
(postgres14)=> reset role;
RESET
Time: 0.189 ms
(postgres14)=> grant pg_read_all_data to testing;
GRANT ROLE
Time: 4.202 ms
(postgres14)=>
(postgres14)=> set role to 'testing';
SET
Time: 0.148 ms
(postgres14)=> SELECT grp, avg(data), count(*) FROM table_test
GROUP BY 1;
grp | avg | count
-----+-------------------------+---------
100 | 0.500173019196875804072 | 5000000
200 | 0.500065682335289928739 | 5000000
(2 rows)
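pg_write_all_data behaves symmetrically; a short sketch reusing the deck's table_test (the role name writer_user is illustrative):

```sql
-- Grant blanket write access without making the user a superuser.
CREATE USER writer_user;
GRANT pg_write_all_data TO writer_user;

SET ROLE writer_user;
INSERT INTO table_test VALUES (300, 0.5);   -- allowed
-- SELECT would still be denied unless pg_read_all_data
-- is also granted.
RESET ROLE;
```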
Features for Application Developers
New JSON Syntax
New JSON Syntax
• JSON support has existed since PostgreSQL 9.2
• It required its own operator syntax for access
Old syntax:
where (jsonb_column->>'name')::varchar = 'Aakash: DBRE(2021)';
New syntax (subscripting):
where jsonb_column['name'] = '"Aakash: DBRE(2021)"';
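Subscripting also works for updates; a small sketch (table and column names are illustrative):

```sql
CREATE TABLE profiles (id int PRIMARY KEY, data jsonb);
INSERT INTO profiles VALUES (1, '{"name": "Aakash"}');

-- Read with the new subscript syntax.
SELECT data['name'] FROM profiles WHERE id = 1;

-- Write through a subscript; the assigned value must be
-- valid jsonb, hence the doubled quotes.
UPDATE profiles SET data['role'] = '"DBRE"' WHERE id = 1;
```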
Multirange Datatypes
Range
• A beginning and an end value
• Over datetime or integer types
• "This train runs between X and Y"
Range - Classic SQL
create table track_activity (
id serial primary key,
appname varchar(100),
start_ts timestamp,
end_ts timestamp
);
insert into track_activity (appname,start_ts,end_ts) values
('Whatapp', '2021-10-20 4:00', '2021-10-20 7:00');
insert into track_activity (appname,start_ts,end_ts) values
('Facebook', '2021-10-20 7:00', '2021-10-20 11:00');
insert into track_activity (appname,start_ts,end_ts) values
('Whatsapp', '2021-10-20 11:00', '2021-10-20 21:00');
insert into track_activity (appname,start_ts,end_ts) values
('Instagram', '2021-10-20 22:00', '2021-10-20 23:00');
Range - Classic SQL
(postgres14)=> select * from track_activity;
id | appname | start_ts | end_ts
----+-----------+---------------------+---------------------
1 | Whatapp | 2021-10-20 04:00:00 | 2021-10-20 07:00:00
2 | Facebook | 2021-10-20 07:00:00 | 2021-10-20 11:00:00
3 | Whatsapp | 2021-10-20 11:00:00 | 2021-10-20 21:00:00
4 | Instagram | 2021-10-20 22:00:00 | 2021-10-20 23:00:00
(4 rows)
Range - Classic SQL
(postgres14)=> select id, appname from track_activity where
start_ts <= '2021-10-20 11:00'::timestamp and end_ts >= '2021-10-20 12:00'::timestamp;
id | appname
----+----------
3 | Whatsapp
(1 row)
Range - Classic SQL
(postgres14)=> select id, appname from track_activity where
(start_ts <= '2021-10-20 11:00'::timestamp and end_ts >= '2021-10-20 12:00'::timestamp) or
(start_ts <= '2021-10-20 17:00'::timestamp and end_ts >= '2021-10-20 18:00'::timestamp);
id | appname
----+----------
3 | Whatsapp
(1 row)
Range - Using Range Data Type
create table track_activity_range (
id serial primary key,
appname varchar(100),
time_ts tsrange
);
insert into track_activity_range (appname,time_ts) values
('Whatapp', '[2021-10-20 4:00, 2021-10-20 7:00]');
insert into track_activity_range (appname,time_ts) values
('Facebook', '[2021-10-20 7:00, 2021-10-20 11:00]');
insert into track_activity_range (appname,time_ts) values
('Whatsapp', '[2021-10-20 11:00, 2021-10-20 21:00]');
insert into track_activity_range (appname,time_ts) values
('Instagram', '[2021-10-20 22:00, 2021-10-20 23:00]');
Range - Using Range Data Type
(postgres14)=> select id, appname from track_activity_range where
time_ts @> '2021-10-20 12:00'::timestamp;
id | appname
----+----------
3 | Whatsapp
(1 row)
Range - Using Range Data Type
(postgres14)=> select * from track_activity_range where id=3;
id | appname | time_ts
----+----------+-----------------------------------------------
3 | Whatsapp | ["2021-10-20 11:00:00","2021-10-20 21:00:00"]
(1 row)
(postgres14)=> update track_activity_range set time_ts = time_ts -
'[2021-10-20 11:00,2021-10-20 11:30]' where id = 3;
UPDATE 1
(postgres14)=> select * from track_activity_range where id=3;
id | appname | time_ts
----+----------+-----------------------------------------------
3 | Whatsapp | ("2021-10-20 11:30:00","2021-10-20 21:00:00"]
(1 row)
Range - Using Range Data Type
(postgres14)=> select * from track_activity_range where id=3;
id | appname | time_ts
----+----------+-----------------------------------------------
3 | Whatsapp | ("2021-10-20 11:30:00","2021-10-20 21:00:00"]
(1 row)
(postgres14)=> update track_activity_range set time_ts = time_ts -
'[2021-10-20 12:00,2021-10-20 12:30]' where id = 3;
ERROR: result of range difference would not be contiguous
Time: 0.594 ms
(postgres14)=>
Range - Using Multi Range Data Type
create table track_activity_multirange (
id serial primary key,
appname varchar(100),
time_ts tsmultirange
);
insert into track_activity_multirange (appname,time_ts) values
('Whatapp', '{[2021-10-20 4:00, 2021-10-20 7:00]}');
insert into track_activity_multirange (appname,time_ts) values
('Facebook', '{[2021-10-20 7:00, 2021-10-20 11:00]}');
insert into track_activity_multirange (appname,time_ts) values
('Whatsapp', '{[2021-10-20 11:00, 2021-10-20 21:00]}');
insert into track_activity_multirange (appname,time_ts) values
('Instagram', '{[2021-10-20 22:00, 2021-10-20 23:00]}');
Range - Using Multi Range Data Type
(postgres14)=> select * from track_activity_multirange;
id | appname | time_ts
----+-----------+-------------------------------------------------
1 | Whatapp | {["2021-10-20 04:00:00","2021-10-20 07:00:00"]}
2 | Facebook | {["2021-10-20 07:00:00","2021-10-20 11:00:00"]}
3 | Whatsapp | {["2021-10-20 11:00:00","2021-10-20 21:00:00"]}
4 | Instagram | {["2021-10-20 22:00:00","2021-10-20 23:00:00"]}
(4 rows)
Range - Using Multi Range Data Type
(postgres14)=> update track_activity_multirange set time_ts =
time_ts - '{[2021-10-20 12:00,2021-10-20 12:30]}' where id = 3;
UPDATE 1
(postgres14)=> select * from track_activity_multirange where id=3;
id | appname |
time_ts
----+----------+--------------------------------------------------
---------------------------------------------
3 | Whatsapp | {["2021-10-20 11:00:00","2021-10-20 12:00:00"),
("2021-10-20 12:30:00","2021-10-20 21:00:00"]}
(1 row)
OUT Parameter for Stored Procedure
Out Parameter for Stored Procedure
• Previously only functions supported OUT parameters
• Stored procedures have existed since v11
• They supported only IN and INOUT parameters
• OUT support makes them more compatible with Oracle's implementation of procedures
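A small sketch of the new capability (the procedure and parameter names are illustrative; requires v14):

```sql
-- OUT parameters in procedures are new in v14.
CREATE PROCEDURE add_two (IN a int, IN b int, OUT total int)
LANGUAGE plpgsql
AS $$
BEGIN
    total := a + b;
END;
$$;

-- From SQL, write NULL as a placeholder for each OUT argument;
-- CALL then returns the OUT values as a result row.
CALL add_two(2, 3, NULL);
```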
Enhancements in Monitoring
pg_stat_progress_copy
pg_stat_progress_copy
postgres=# select * from pg_stat_progress_copy \watch 1
               Tue Oct 19 13:25:45 2021 (every 1s)
 pid | datid | datname | relid | command | type | bytes_processed | bytes_total | tuples_processed | tuples_excluded
-----+-------+---------+-------+---------+------+-----------------+-------------+------------------+-----------------
(0 rows)
• Reports the progress of running COPY commands
pg_stat_progress_copy
postgres=# \copy (select i, now() - random() * '2 years'::interval, random()
from generate_series(1,10000000) i) to /dev/null
• Session 1 - Copying out 10M generated rows
pg_stat_progress_copy
               Tue Oct 19 13:31:31 2021 (every 1s)
  pid  | datid | datname  | relid | command | type | bytes_processed | bytes_total | tuples_processed | tuples_excluded
-------+-------+----------+-------+---------+------+-----------------+-------------+------------------+-----------------
 28382 | 14486 | postgres |     0 | COPY TO | PIPE |         2267809 |           0 |            41232 |               0
(1 row)
• Session 2 - Monitoring the pg_stat_progress_copy view continuously
pg_stat_progress_copy
datname | command | tuples_processed
----------+---------+------------------
postgres | COPY TO | 0
(1 row)
Tue Oct 19 13:37:24 2021 (every 1s)
datname | command | tuples_processed
----------+---------+------------------
postgres | COPY TO | 704370
(1 row)
Tue Oct 19 13:37:25 2021 (every 1s)
datname | command | tuples_processed
----------+---------+------------------
postgres | COPY TO | 1612781
(1 row)
pg_stat_progress_copy
postgres=# \copy (select i, now() - random() * '2 years'::interval, random()
from generate_series(1,10000000) i) to /dev/null
COPY 10000000
postgres=#
datname | command | tuples_processed
---------+---------+------------------
(0 rows)
Tue Oct 19 13:40:26 2021 (every 1s)
datname | command | tuples_processed
---------+---------+------------------
(0 rows)
Enhancements in pg_stat_statements
pg_stat_statements
• Very useful extension
• Tracks execution statistics of all SQL statements
• Inexpensive to run
• Works in any environment, on-premises or cloud
• Its data can be exported to dashboards such as Grafana
• Stores a hash (query ID) of each normalized query
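For reference, the extension has to be preloaded and created before it collects anything; a typical setup sketch:

```sql
-- In postgresql.conf (requires a restart):
--   shared_preload_libraries = 'pg_stat_statements'
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Example: the five most expensive statements by total time.
SELECT query, calls, total_exec_time, rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;
```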
pg_stat_statements
Limitations:
• Does not record when a given query happened
• Does not keep queries with exact parameter values, only normalized text
Enhancements in pg_stat_statements
postgres=# select * from pg_stat_statements_info;
dealloc | stats_reset
---------+-------------------------------
0 | 2021-10-18 14:28:12.693117+00
(1 row)
• System view - pg_stat_statements_info
• Shows since when the query statistics have been collected (stats_reset)
Enhancements in pg_stat_statements
postgres=# select queryid,query from pg_stat_statements limit 1;
queryid | query
---------------------+---------------------------------------------
-
2403671352246453279 | select queryid,query from pg_stat_statements
(1 row)
In the error log:
2021-10-18 14:55:15 UTC [10840]: [2-1]
user=postgres,db=postgres,app=psql,client=[local],query_id=2403671352246453279 LOG: duration: 3.580 ms
• New log_line_prefix escape added - %Q
• The query ID is now also written to the log file, so log lines can be matched back to pg_stat_statements
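A possible logging configuration using the new escape (the prefix string here is illustrative):

```sql
-- %Q expands to the query ID; compute_query_id defaults to "auto",
-- which enables it once pg_stat_statements is loaded.
ALTER SYSTEM SET log_line_prefix = '%m [%p] user=%u,db=%d,query_id=%Q ';
SELECT pg_reload_conf();
```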
Enhancements in pg_stat_statements
postgres=# select query,rows from pg_stat_statements limit 1;
query | rows
----------------------------------------------+------
select queryid,query from pg_stat_statements | 7
(1 row)
• The rows column - number of rows processed
• In v14 it also counts rows for utility commands (e.g. CREATE TABLE AS, SELECT INTO, FETCH)
Planning for Upgrade
• Logical or physical upgrade
• Perform a dry run if possible
• Indexes created on versions < 12 should be reindexed to benefit from the newer B-tree format
• Run ANALYZE after the upgrade, since optimizer statistics are not carried over
• Perform functional testing of the application
References
• https://sql-info.de/postgresql/postgresql-14/articles-about-new-features-in-postgresql-14.html
• https://www.cybertec-postgresql.com/en/index-bloat-reduced-in-postgresql-v14/
• https://www.postgresql.org/about/news/postgresql-14-released-2318/
• https://www.enterprisedb.com/blog/logical-decoding-large-progress-transactions-postgresql
Reach Us : Info@mydbops.com
Thank You
Mastering Database Migration_ Native replication (8.0) to InnoDB Cluster (8.0...Mastering Database Migration_ Native replication (8.0) to InnoDB Cluster (8.0...
Mastering Database Migration_ Native replication (8.0) to InnoDB Cluster (8.0...
 
Enhancing Security of MySQL Connections using SSL certificates
Enhancing Security of MySQL Connections using SSL certificatesEnhancing Security of MySQL Connections using SSL certificates
Enhancing Security of MySQL Connections using SSL certificates
 
Exploring the Fundamentals of YugabyteDB - Mydbops
Exploring the Fundamentals of YugabyteDB - Mydbops Exploring the Fundamentals of YugabyteDB - Mydbops
Exploring the Fundamentals of YugabyteDB - Mydbops
 
Time series in MongoDB - Mydbops
Time series in MongoDB - Mydbops Time series in MongoDB - Mydbops
Time series in MongoDB - Mydbops
 
TiDB in a Nutshell - Power of Open-Source Distributed SQL Database - Mydbops
TiDB in a Nutshell - Power of Open-Source Distributed SQL Database - MydbopsTiDB in a Nutshell - Power of Open-Source Distributed SQL Database - Mydbops
TiDB in a Nutshell - Power of Open-Source Distributed SQL Database - Mydbops
 
Achieving High Availability in PostgreSQL
Achieving High Availability in PostgreSQLAchieving High Availability in PostgreSQL
Achieving High Availability in PostgreSQL
 
Scaling MongoDB with Horizontal and Vertical Sharding
Scaling MongoDB with Horizontal and Vertical Sharding Scaling MongoDB with Horizontal and Vertical Sharding
Scaling MongoDB with Horizontal and Vertical Sharding
 

Último

Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptx
negromaestrong
 
Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.
MateoGardella
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
ciinovamais
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
heathfieldcps1
 
An Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdfAn Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdf
SanaAli374401
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
PECB
 
Gardella_PRCampaignConclusion Pitch Letter
Gardella_PRCampaignConclusion Pitch LetterGardella_PRCampaignConclusion Pitch Letter
Gardella_PRCampaignConclusion Pitch Letter
MateoGardella
 

Último (20)

Seal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptxSeal of Good Local Governance (SGLG) 2024Final.pptx
Seal of Good Local Governance (SGLG) 2024Final.pptx
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.ppt
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Measures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SDMeasures of Dispersion and Variability: Range, QD, AD and SD
Measures of Dispersion and Variability: Range, QD, AD and SD
 
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxBasic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.Gardella_Mateo_IntellectualProperty.pdf.
Gardella_Mateo_IntellectualProperty.pdf.
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
 
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
SECOND SEMESTER TOPIC COVERAGE SY 2023-2024 Trends, Networks, and Critical Th...
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 
An Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdfAn Overview of Mutual Funds Bcom Project.pdf
An Overview of Mutual Funds Bcom Project.pdf
 
psychiatric nursing HISTORY COLLECTION .docx
psychiatric  nursing HISTORY  COLLECTION  .docxpsychiatric  nursing HISTORY  COLLECTION  .docx
psychiatric nursing HISTORY COLLECTION .docx
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
Unit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxUnit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptx
 
Unit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptxUnit-IV- Pharma. Marketing Channels.pptx
Unit-IV- Pharma. Marketing Channels.pptx
 
Gardella_PRCampaignConclusion Pitch Letter
Gardella_PRCampaignConclusion Pitch LetterGardella_PRCampaignConclusion Pitch Letter
Gardella_PRCampaignConclusion Pitch Letter
 

What is new in PostgreSQL 14?

INSERT INTO test
SELECT i, 'text number ' || i
FROM generate_series(1, 1000000) AS i;

DELETE FROM test WHERE id BETWEEN 501 AND 799500;

ANALYZE test;
Killed Index Tuples
• First execution - the range scan touches many pages that hold only dead tuples:

test=# EXPLAIN (ANALYZE, BUFFERS, COSTS OFF, TIMING OFF)
       SELECT * FROM test WHERE id BETWEEN 1 AND 800000;
                          QUERY PLAN
---------------------------------------------------------------
 Index Scan using test_pkey on test (actual rows=1000 loops=1)
   Index Cond: ((id >= 1) AND (id <= 800000))
   Buffers: shared hit=7284
 Planning:
   Buffers: shared hit=25
 Planning Time: 0.204 ms
 Execution Time: 95.223 ms
(7 rows)

• Second execution - the first scan marked the dead index entries as "killed", so far fewer buffers are read:

test=# EXPLAIN (ANALYZE, BUFFERS, COSTS OFF, TIMING OFF)
       SELECT * FROM test WHERE id BETWEEN 1 AND 800000;
                          QUERY PLAN
---------------------------------------------------------------
 Index Scan using test_pkey on test (actual rows=1000 loops=1)
   Index Cond: ((id >= 1) AND (id <= 800000))
   Buffers: shared hit=2196
 Planning:
   Buffers: shared hit=8
 Planning Time: 0.196 ms
 Execution Time: 3.815 ms
(7 rows)
Bottom-Up Index Tuple Deletion
• Introduced in version 14
• Deletes index entries pointing to dead tuples
• Performs part of VACUUM's work, before a page would have to split
Logical Replication - Publisher/Subscriber Model
[Diagram: on the source/publisher, the WAL sender decodes WAL records through a publication into logical messages, which the subscriber applies on the destination.]

logical_decoding_work_mem - PostgreSQL 13
[Diagram: logical_decoding_work_mem caps the memory the WAL sender uses while decoding a transaction; in PostgreSQL 13, changes beyond it are spilled to disk and sent only at commit.]

Streaming in-progress Transactions - PostgreSQL 14
[Diagram: in PostgreSQL 14, once logical_decoding_work_mem is exceeded, the in-progress transaction can be streamed to the subscriber instead of being spilled to disk.]
Streaming in-progress Transactions - PostgreSQL 14

[publisher configuration]
wal_level = logical
synchronous_standby_names = '*'
logical_decoding_work_mem = 1MB
Streaming in-progress Transactions - PostgreSQL 14

[publisher setup]
CREATE TABLE test(id int, name text);
CREATE PUBLICATION test_pub FOR TABLE test;

[subscriber setup]
CREATE TABLE test(id int, name text);
CREATE SUBSCRIPTION test_sub
    CONNECTION 'host=127.0.0.1 port=5432 dbname=postgres'
    PUBLICATION test_pub;
Streaming in-progress Transactions - PostgreSQL 14

INSERT INTO test
SELECT i, REPEAT('x', 10) FROM generate_series(1,5000000) AS i;

Time taken for COMMIT: 6.7 secs

With streaming enabled on the subscription:

ALTER SUBSCRIPTION test_sub SET (STREAMING = ON);

INSERT INTO test
SELECT i, REPEAT('x', 10) FROM generate_series(1,5000000) AS i;

Time taken for COMMIT: 3.5 secs
TOAST
• Mechanism used to avoid exceeding the size of a data block
• First tries to compress the value
• Then splits wide field values into smaller pieces
• By default this kicks in above 2KB
• Every table has its own TOAST table
New Compression Configuration
• pglz - the built-in compression method
• lz4 - the newly added compression method
• New configuration variable - default_toast_compression
• The server must be built with --with-lz4
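A minimal sketch of the new knobs (assuming a server built with --with-lz4; the docs table and column are illustrative):

```sql
-- Session-wide default for newly stored values
SET default_toast_compression = 'lz4';

-- Or pick the method per column
CREATE TABLE docs (
    id   integer PRIMARY KEY,
    body text COMPRESSION lz4
);

INSERT INTO docs VALUES (1, repeat('compressible text ', 1000));

-- pg_column_compression() reports the method used for a stored value
SELECT pg_column_compression(body) FROM docs WHERE id = 1;
```

Note that pg_column_compression() returns NULL for values small enough to be stored inline without compression.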
idle_session_timeout
• New variable
• Serves almost the same purpose as idle_in_transaction_session_timeout

postgres=# select name,setting,unit,context,pending_restart from pg_settings where name = 'idle_session_timeout' \gx
-[ RECORD 1 ]---+---------------------
name            | idle_session_timeout
setting         | 0
unit            | ms
context         | user
pending_restart | f
idle_in_transaction_session_timeout
• Available since PostgreSQL 9.6
• Maximum allowed idle time between queries while inside a transaction

postgres=# select name,setting,unit,context,pending_restart from pg_settings where name = 'idle_in_transaction_session_timeout' \gx
-[ RECORD 1 ]---+------------------------------------
name            | idle_in_transaction_session_timeout
setting         | 0
unit            | ms
context         | user
pending_restart | f
idle_in_transaction_session_timeout
• Opened a new session and started a transaction

postgres=# set idle_in_transaction_session_timeout = 2000;
SET
postgres=# start transaction;
START TRANSACTION
postgres=*# select name,setting from pg_settings where name = 'idle_in_transaction_session_timeout' \gx
-[ RECORD 1 ]--------------------------------
name    | idle_in_transaction_session_timeout
setting | 2000

• After the session sits idle inside the transaction for more than two seconds, the next query fails:

postgres=*# select name,setting from pg_settings where name = 'idle_in_transaction_session_timeout' \gx
FATAL:  terminating connection due to idle-in-transaction timeout
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
idle_session_timeout
• Maximum idle time between queries, when not in a transaction

postgres=# set idle_session_timeout = 5000;
SET
postgres=# select name,setting from pg_settings where name = 'idle_in_transaction_session_timeout' \gx
-[ RECORD 1 ]--------------------------------
name    | idle_in_transaction_session_timeout
setting | 0

postgres=# select name,setting from pg_settings where name = 'idle_in_transaction_session_timeout' \gx
FATAL:  terminating connection due to idle-session timeout
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
idle_session_timeout
• Helps clean up idle connections
• Applies only to connections opened after the change
• Should be set higher than the application's own timeout
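A minimal sketch of enabling it (the timeout values are illustrative):

```sql
-- Per session
SET idle_session_timeout = '5min';

-- Server-wide; applies to sessions opened after the reload
ALTER SYSTEM SET idle_session_timeout = '10min';
SELECT pg_reload_conf();
```

Leave it at 0 (disabled) where a connection pooler is expected to keep idle server connections open.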
Improvements to Query Parallelism
Materialized View
• A database object
• Stores only the result of its defining query
• A refresh is needed to pick up new data
Materialized View
• Loading data in PostgreSQL 13

(postgres13)=> CREATE TABLE table_test (grp int, data numeric);
CREATE TABLE
Time: 3.282 ms
(postgres13)=> INSERT INTO table_test SELECT 100, random() FROM generate_series(1, 5000000);
INSERT 0 5000000
Time: 8064.350 ms (00:08.064)
(postgres13)=> INSERT INTO table_test SELECT 200, random() FROM generate_series(1, 5000000);
INSERT 0 5000000
Time: 9250.536 ms (00:09.251)

• Loading data in PostgreSQL 14

(postgres14)=> CREATE TABLE table_test (grp int, data numeric);
CREATE TABLE
Time: 13.577 ms
(postgres14)=> INSERT INTO table_test SELECT 100, random() FROM generate_series(1, 5000000);
INSERT 0 5000000
Time: 7204.919 ms (00:07.205)
(postgres14)=> INSERT INTO table_test SELECT 200, random() FROM generate_series(1, 5000000);
INSERT 0 5000000
Time: 8618.068 ms (00:08.618)
Materialized View
• Executing the query on both versions

(postgres13)=> SELECT grp, avg(data), count(*) FROM table_test GROUP BY 1;
 grp |           avg           |  count
-----+-------------------------+---------
 100 | 0.499979689549607876793 | 5000000
 200 | 0.500123546167930387393 | 5000000
(2 rows)
Time: 960.802 ms

(postgres14)=> SELECT grp, avg(data), count(*) FROM table_test GROUP BY 1;
 grp |           avg           |  count
-----+-------------------------+---------
 100 | 0.500173019196875804072 | 5000000
 200 | 0.500065682335289928739 | 5000000
(2 rows)
Time: 965.867 ms
Materialized View
• Creating a materialized view over the static table

(postgres13)=> CREATE MATERIALIZED VIEW test_view AS SELECT grp, avg(data), count(*) FROM table_test GROUP BY 1;
SELECT 2
Time: 983.138 ms

(postgres14)=> CREATE MATERIALIZED VIEW test_view AS SELECT grp, avg(data), count(*) FROM table_test GROUP BY 1;
SELECT 2
Time: 978.225 ms
Materialized View
• Querying the materialized view instead of the table

(postgres13)=> select * from test_view;
 grp |           avg           |  count
-----+-------------------------+---------
 100 | 0.499979689549607876793 | 5000000
 200 | 0.500123546167930387393 | 5000000
(2 rows)
Time: 0.561 ms

(postgres14)=> select * from test_view;
 grp |           avg           |  count
-----+-------------------------+---------
 100 | 0.500173019196875804072 | 5000000
 200 | 0.500065682335289928739 | 5000000
(2 rows)
Time: 0.540 ms
Refresh Materialized View
• Refreshing the materialized view to get the updated result

(postgres13)=> refresh materialized view test_view;
REFRESH MATERIALIZED VIEW
Time: 2027.911 ms (00:02.028)

(postgres14)=> refresh materialized view test_view;
REFRESH MATERIALIZED VIEW
Time: 961.503 ms
Refresh Materialized View
• Explain plan with parallel workers

(postgres14)=> explain SELECT grp, avg(data), count(*) FROM table_test GROUP BY 1;
                             QUERY PLAN
------------------------------------------------------------------------
 Finalize GroupAggregate  (cost=127997.34..127997.87 rows=2 width=44)
   Group Key: grp
   ->  Gather Merge  (cost=127997.34..127997.81 rows=4 width=44)
         Workers Planned: 2
         ->  Sort  (cost=126997.32..126997.32 rows=2 width=44)
               Sort Key: grp
               ...... on table_test  (cost=0.00..95747.02 rows=4166702 width=15)
(9 rows)
Time: 0.438 ms
Refresh Materialized View
• Apart from REFRESH, the other materialized view operations already used parallelism
• On versions < 14 it is often faster to recreate the view than to refresh it
• A plain refresh locks out reads of the view's contents
• The CONCURRENTLY option avoids that, but a unique index is mandatory
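For example, the CONCURRENTLY path could look like this (the index name is illustrative; it reuses test_view from the demo above):

```sql
-- CONCURRENTLY requires a unique index on the materialized view
CREATE UNIQUE INDEX test_view_grp_idx ON test_view (grp);

-- Rebuilds the view without locking out concurrent readers
REFRESH MATERIALIZED VIEW CONCURRENTLY test_view;
```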
Default Authentication Method - scram-sha-256

authentication method 10 not supported

• Previously MD5 was the default hashing method
• MD5 hashes are comparatively easy to reconstruct
• scram-sha-256 uses a more expensive hash function
• Introduced in v10, now made the default
• The client needs to support it (older drivers fail with the error above)
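A minimal sketch of moving an existing user over (the role name is illustrative). Note that changing the setting alone is not enough - the stored password must be re-hashed:

```sql
SHOW password_encryption;                 -- 'scram-sha-256' on a fresh v14 initdb
SET password_encryption = 'scram-sha-256';

-- Re-setting the password stores a SCRAM verifier instead of an MD5 hash
ALTER USER app_user PASSWORD 'secret';
```

The matching pg_hba.conf entries should then use the scram-sha-256 method instead of md5.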
Predefined Roles - pg_read_all_data or pg_write_all_data
• Previously the options were to be a superuser, or to grant access object by object
• These roles provide access across the complete database
Predefined Roles - pg_read_all_data
• Read all data (tables, views and sequences)
• SELECT on objects and USAGE on schemas
Predefined Roles - pg_write_all_data
• Write all data (tables, views and sequences)
• INSERT, UPDATE and DELETE on objects and USAGE on schemas
Predefined Roles - pg_read_all_data or pg_write_all_data

(postgres14)=> create user testing;
CREATE ROLE
Time: 5.468 ms
(postgres14)=> set role to 'testing';
SET
Time: 0.392 ms
(postgres14)=> SELECT grp, avg(data), count(*) FROM table_test GROUP BY 1;
ERROR:  permission denied for table table_test
Time: 2.010 ms

(postgres14)=> reset role;
RESET
Time: 0.189 ms
(postgres14)=> grant pg_read_all_data to testing;
GRANT ROLE
Time: 4.202 ms
(postgres14)=> set role to 'testing';
SET
Time: 0.148 ms
(postgres14)=> SELECT grp, avg(data), count(*) FROM table_test GROUP BY 1;
 grp |           avg           |  count
-----+-------------------------+---------
 100 | 0.500173019196875804072 | 5000000
 200 | 0.500065682335289928739 | 5000000
(2 rows)
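Continuing the demo, the write-side role could be exercised the same way (the inserted values are illustrative):

```sql
RESET ROLE;
GRANT pg_write_all_data TO testing;
SET ROLE testing;
INSERT INTO table_test VALUES (300, 0.5);  -- now permitted
```

One caveat: pg_write_all_data does not include SELECT, so an UPDATE or DELETE whose WHERE clause reads columns still needs read access (e.g. pg_read_all_data as well).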
New JSON Syntax
• JSON support since PostgreSQL 9.2
• It had its own, unique operator syntax

Old syntax:
where (jsonb_column->>'name')::varchar = 'Aakash: DBRE(2021)';

New syntax:
where jsonb_column['name'] = '"Aakash: DBRE(2021)"';
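A small self-contained sketch of the subscript syntax (the table and values are illustrative):

```sql
CREATE TABLE profiles (id int PRIMARY KEY, details jsonb);
INSERT INTO profiles VALUES (1, '{"name": "Aakash", "city": "Chennai"}');

-- Subscripting returns jsonb, so compare against a jsonb literal
SELECT id FROM profiles WHERE details['name'] = '"Aakash"';

-- The same syntax works as an assignment target in UPDATE
UPDATE profiles SET details['city'] = '"Bengaluru"' WHERE id = 1;
```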
Range
• A beginning and an end value
• Datetime or integer
• "This train is running between X and Y"
Range - Classic SQL

create table track_activity (
    id serial primary key,
    appname varchar(100),
    start_ts timestamp,
    end_ts timestamp
);

insert into track_activity (appname,start_ts,end_ts) values ('Whatapp', '2021-10-20 4:00', '2021-10-20 7:00');
insert into track_activity (appname,start_ts,end_ts) values ('Facebook', '2021-10-20 7:00', '2021-10-20 11:00');
insert into track_activity (appname,start_ts,end_ts) values ('Whatsapp', '2021-10-20 11:00', '2021-10-20 21:00');
insert into track_activity (appname,start_ts,end_ts) values ('Instagram', '2021-10-20 22:00', '2021-10-20 23:00');

(postgres14)=> select * from track_activity;
 id |  appname  |      start_ts       |       end_ts
----+-----------+---------------------+---------------------
  1 | Whatapp   | 2021-10-20 04:00:00 | 2021-10-20 07:00:00
  2 | Facebook  | 2021-10-20 07:00:00 | 2021-10-20 11:00:00
  3 | Whatsapp  | 2021-10-20 11:00:00 | 2021-10-20 21:00:00
  4 | Instagram | 2021-10-20 22:00:00 | 2021-10-20 23:00:00
(4 rows)
Range - Classic SQL
• Which app was active across a given interval?

(postgres14)=> select id, appname from track_activity
               where start_ts <= '2021-10-20 11:00'::timestamp
                 and end_ts >= '2021-10-20 12:00'::timestamp;
 id | appname
----+----------
  3 | Whatsapp
(1 row)

• The predicates get unwieldy as more intervals are combined:

(postgres14)=> select id, appname from track_activity
               where (start_ts <= '2021-10-20 11:00'::timestamp and end_ts >= '2021-10-20 12:00'::timestamp)
                  or (start_ts <= '2021-10-20 17:00'::timestamp and end_ts >= '2021-10-20 18:00'::timestamp);
 id | appname
----+----------
  3 | Whatsapp
(1 row)
Range - Using Range Data Type

create table track_activity_range (
    id serial primary key,
    appname varchar(100),
    time_ts tsrange
);

insert into track_activity_range (appname,time_ts) values ('Whatapp', '[2021-10-20 4:00, 2021-10-20 7:00]');
insert into track_activity_range (appname,time_ts) values ('Facebook', '[2021-10-20 7:00, 2021-10-20 11:00]');
insert into track_activity_range (appname,time_ts) values ('Whatsapp', '[2021-10-20 11:00, 2021-10-20 21:00]');
insert into track_activity_range (appname,time_ts) values ('Instagram', '[2021-10-20 22:00, 2021-10-20 23:00]');
Range - Using Range Data Type
• The containment operator @> replaces the hand-written comparisons:

(postgres14)=> select id, appname from track_activity_range
               where time_ts @> '2021-10-20 12:00'::timestamp;
 id | appname
----+----------
  3 | Whatsapp
(1 row)
Range - Using Range Data Type
• Subtracting a sub-range from one end works:

(postgres14)=> select * from track_activity_range where id=3;
 id | appname  |                    time_ts
----+----------+-----------------------------------------------
  3 | Whatsapp | ["2021-10-20 11:00:00","2021-10-20 21:00:00"]
(1 row)

(postgres14)=> update track_activity_range set time_ts = time_ts - '[2021-10-20 11:00,2021-10-20 11:30]' where id = 3;
UPDATE 1
(postgres14)=> select * from track_activity_range where id=3;
 id | appname  |                    time_ts
----+----------+-----------------------------------------------
  3 | Whatsapp | ("2021-10-20 11:30:00","2021-10-20 21:00:00"]
(1 row)
Range - Using Range Data Type
• But subtracting from the middle fails, since a single range must stay contiguous:

(postgres14)=> select * from track_activity_range where id=3;
 id | appname  |                    time_ts
----+----------+-----------------------------------------------
  3 | Whatsapp | ("2021-10-20 11:30:00","2021-10-20 21:00:00"]
(1 row)

(postgres14)=> update track_activity_range set time_ts = time_ts - '[2021-10-20 12:00,2021-10-20 12:30]' where id = 3;
ERROR:  result of range difference would not be contiguous
Time: 0.594 ms
Range - Using Multirange Data Type

create table track_activity_multirange (
    id serial primary key,
    appname varchar(100),
    time_ts tsmultirange
);

insert into track_activity_multirange (appname,time_ts) values ('Whatapp', '{[2021-10-20 4:00, 2021-10-20 7:00]}');
insert into track_activity_multirange (appname,time_ts) values ('Facebook', '{[2021-10-20 7:00, 2021-10-20 11:00]}');
insert into track_activity_multirange (appname,time_ts) values ('Whatsapp', '{[2021-10-20 11:00, 2021-10-20 21:00]}');
insert into track_activity_multirange (appname,time_ts) values ('Instagram', '{[2021-10-20 22:00, 2021-10-20 23:00]}');
Range - Using Multirange Data Type

(postgres14)=> select * from track_activity_multirange;
 id |  appname  |                     time_ts
----+-----------+-------------------------------------------------
  1 | Whatapp   | {["2021-10-20 04:00:00","2021-10-20 07:00:00"]}
  2 | Facebook  | {["2021-10-20 07:00:00","2021-10-20 11:00:00"]}
  3 | Whatsapp  | {["2021-10-20 11:00:00","2021-10-20 21:00:00"]}
  4 | Instagram | {["2021-10-20 22:00:00","2021-10-20 23:00:00"]}
(4 rows)
Range - Using Multirange Data Type
• A multirange can hold the gap that a plain range could not:

(postgres14)=> update track_activity_multirange set time_ts = time_ts - '{[2021-10-20 12:00,2021-10-20 12:30]}' where id = 3;
UPDATE 1
(postgres14)=> select * from track_activity_multirange where id=3;
 id | appname  |                                            time_ts
----+----------+-----------------------------------------------------------------------------------------------
  3 | Whatsapp | {["2021-10-20 11:00:00","2021-10-20 12:00:00"),("2021-10-20 12:30:00","2021-10-20 21:00:00"]}
(1 row)
OUT Parameter for Stored Procedure
OUT Parameter for Stored Procedure
• Previously only functions supported OUT parameters
• Stored procedures have existed since v11
• They supported only IN and INOUT parameters
• OUT support makes them more compatible with Oracle's implementation of procedures
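A minimal sketch of the new capability (the procedure name and body are illustrative):

```sql
CREATE PROCEDURE sum_to(IN n int, OUT total bigint)
LANGUAGE plpgsql
AS $$
BEGIN
    SELECT sum(i) INTO total FROM generate_series(1, n) AS i;
END;
$$;

-- From SQL, pass NULL as a placeholder for the OUT argument;
-- psql prints the returned value
CALL sum_to(10, NULL);
```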
pg_stat_progress_copy
• Reports progress of COPY commands

postgres=# select * from pg_stat_progress_copy \watch 1
                Tue Oct 19 13:25:45 2021 (every 1s)

 pid | datid | datname | relid | command | type | bytes_processed | bytes_total | tuples_processed | tuples_excluded
-----+-------+---------+-------+---------+------+-----------------+-------------+------------------+-----------------
(0 rows)
pg_stat_progress_copy
• Session 1 - copy out 10M generated rows

postgres=# \copy (select i, now() - random() * '2 years'::interval, random() from generate_series(1,10000000) i) to /dev/null
pg_stat_progress_copy
• Session 2 - monitoring pg_stat_progress_copy continuously

                Tue Oct 19 13:31:31 2021 (every 1s)

  pid  | datid | datname  | relid | command | type | bytes_processed | bytes_total | tuples_processed | tuples_excluded
-------+-------+----------+-------+---------+------+-----------------+-------------+------------------+-----------------
 28382 | 14486 | postgres |     0 | COPY TO | PIPE |         2267809 |           0 |            41232 |               0
(1 row)
pg_stat_progress_copy
• tuples_processed grows as the COPY runs:

 datname  | command | tuples_processed
----------+---------+------------------
 postgres | COPY TO |                0
(1 row)

                Tue Oct 19 13:37:24 2021 (every 1s)

 datname  | command | tuples_processed
----------+---------+------------------
 postgres | COPY TO |           704370
(1 row)

                Tue Oct 19 13:37:25 2021 (every 1s)

 datname  | command | tuples_processed
----------+---------+------------------
 postgres | COPY TO |          1612781
(1 row)
pg_stat_progress_copy
• Once the COPY finishes, its row disappears from the view:

postgres=# \copy (select i, now() - random() * '2 years'::interval, random() from generate_series(1,10000000) i) to /dev/null
COPY 10000000

 datname | command | tuples_processed
---------+---------+------------------
(0 rows)

                Tue Oct 19 13:40:26 2021 (every 1s)

 datname | command | tuples_processed
---------+---------+------------------
(0 rows)
pg_stat_statements
• Very useful extension
• Tracks execution statistics of all SQL statements
• Inexpensive to keep enabled
• Supported across infrastructures (self-hosted and managed)
• Its data can be loaded into Grafana
• Stores a hash of each normalized query
pg_stat_statements
• What it cannot tell you: when a given query ran
• Or the query text with its exact parameter values
Enhancements in pg_stat_statements
• New system view - pg_stat_statements_info
• Shows since when the query stats have been collected

postgres=# select * from pg_stat_statements_info;
 dealloc |          stats_reset
---------+-------------------------------
       0 | 2021-10-18 14:28:12.693117+00
(1 row)
Enhancements in pg_stat_statements
• New log_line_prefix escape added - %Q
• Writes the query id into the log file as well

postgres=# select queryid,query from pg_stat_statements limit 1;
       queryid       |                    query
---------------------+----------------------------------------------
 2403671352246453279 | select queryid,query from pg_stat_statements
(1 row)

In the error log:
2021-10-18 14:55:15 UTC [10840]: [2-1] user=postgres,db=postgres,app=psql,client=[local],query_id=2403671352246453279 LOG: duration: 3.580 ms
Enhancements in pg_stat_statements
• New column added - rows
• Number of rows processed by the statement

postgres=# select query,rows from pg_stat_statements limit 1;
                    query                     | rows
----------------------------------------------+------
 select queryid,query from pg_stat_statements |    7
(1 row)
Planning for Upgrade
• Choose between a logical and a physical upgrade
• Perform a dry run if possible
• Upgrading from versions < 12 requires reindexing
• ANALYZE needs to be performed after the upgrade
• Perform functional testing of the application
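For instance, the post-upgrade steps could be sketched as follows (run per database; the database name is illustrative, and whether reindexing is needed depends on the source version):

```sql
-- pg_upgrade does not carry planner statistics over
ANALYZE;

-- Coming from versions below 12, rebuild indexes to pick up the
-- improved btree layout (including deduplication from v13 onwards)
REINDEX DATABASE postgres;
```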
References
• https://sql-info.de/postgresql/postgresql-14/articles-about-new-features-in-postgresql-14.html
• https://www.cybertec-postgresql.com/en/index-bloat-reduced-in-postgresql-v14/
• https://www.postgresql.org/about/news/postgresql-14-released-2318/
• https://www.enterprisedb.com/blog/logical-decoding-large-progress-transactions-postgresql
Thank You
Reach us: Info@mydbops.com