Performance Archaeology 
Tomáš Vondra, GoodData 
tomas.vondra@gooddata.com / tomas@pgaddict.com 
@fuzzycz, http://blog.pgaddict.com 
Photo by Jason Quinlan, Creative Commons CC-BY-NC-SA 
https://www.flickr.com/photos/catalhoyuk/9400568431
How did PostgreSQL performance
evolve over time?
7.4 released in 2003, i.e. ~10 years ago
(surprisingly) tricky question 
● usually “partial” tests during development 
– compare two versions / commits 
– focused on a particular part of the code / feature 
● more complex benchmarks compare two versions 
– difficult to “combine” (different hardware, ...) 
● application performance (ultimate benchmark) 
– apps are subject to (regular) hardware upgrades
– amounts of data grow, applications evolve (new features)
(somewhat) unfair question
● we develop within the context of current hardware
– How much RAM did you use 10 years ago?
– How many of you had SSD/NVRAM drives 10 years ago?
– How common were machines with 8 cores?
● some differences are a consequence of these changes
● a lot of stuff was improved outside PostgreSQL (ext3 -> ext4)
Better performance on current 
hardware is always nice ;-)
Let's do some benchmarks!
short version: 
We're much faster and more scalable.
If you're scared of numbers or charts, 
you should probably leave now.
http://blog.pgaddict.com 
http://planet.postgresql.org 
http://slidesha.re/1CUv3xO
Benchmarks (overview) 
● pgbench (TPC-B) 
– “transactional” benchmark 
– operations work with small row sets (access through PKs, ...) 
● TPC-DS (replaces TPC-H) 
– “warehouse” benchmark 
– queries chewing large amounts of data (aggregations, joins, 
ROLLUP/CUBE, ...) 
● fulltext benchmark (tsearch2) 
– primarily about improvements of GIN/GiST indexes 
– only fulltext here; there are many other uses for GIN/GiST (geo, ...)
Hardware used 
HP DL380 G5 (2007-2009) 
● 2x Xeon E5450 (each 4 cores @ 3GHz, 12MB cache) 
● 16GB RAM (FB-DIMM DDR2 667 MHz), FSB 1333 MHz 
● S3700 100GB (SSD) 
● 6x10k RAID10 (SAS) @ P400 with 512MB write cache 
● Scientific Linux 6.5 / kernel 2.6.32, ext4
pgbench 
TPC-B “transactional” benchmark
pgbench 
● three dataset sizes 
– small (150 MB) 
– medium (~50% RAM) 
– large (~200% RAM) 
● two modes 
– read-only and read-write 
● client counts (1, 2, 4, ..., 32) 
● 3 runs / 30 minutes each (per combination)
pgbench 
● three dataset sizes 
– small (150 MB) <– locking issues, etc. 
– medium (~50% RAM) <– CPU bound 
– large (~200% RAM) <– I/O bound 
● two modes 
– read-only and read-write 
● client counts (1, 2, 4, ..., 32) 
● 3 runs / 30 minutes each (per combination)
-- TPC-B-style transaction executed by pgbench (read-write mode)
BEGIN;
UPDATE accounts SET abalance = abalance + :delta 
WHERE aid = :aid; 
SELECT abalance FROM accounts WHERE aid = :aid; 
UPDATE tellers SET tbalance = tbalance + :delta 
WHERE tid = :tid; 
UPDATE branches SET bbalance = bbalance + :delta 
WHERE bid = :bid; 
INSERT INTO history (tid, bid, aid, delta, mtime) 
VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP); 
END;
[chart] pgbench / large read-only (on SSD)
HP DL380 G5 (2x Xeon E5450, 16 GB DDR2 RAM), Intel S3700 100GB SSD
x-axis: number of clients, y-axis: transactions per second
versions: 7.4, 8.0, 8.1, head
[chart] pgbench / medium read-only (SSD)
HP DL380 G5 (2x Xeon E5450, 16 GB DDR2 RAM), Intel S3700 100GB SSD
x-axis: number of clients, y-axis: transactions per second
versions: 7.4, 8.0, 8.1, 8.2, 8.3, 9.0, 9.2
[chart] pgbench / large read-write (SSD)
HP DL380 G5 (2x Xeon E5450, 16 GB DDR2 RAM), Intel S3700 100GB SSD
x-axis: number of clients, y-axis: transactions per second
versions: 7.4, 8.1, 8.3, 9.1, 9.2
[chart] pgbench / small read-write (SSD)
HP DL380 G5 (2x Xeon E5450, 16 GB DDR2 RAM), Intel S3700 100GB SSD
x-axis: number of clients, y-axis: transactions per second
versions: 7.4, 8.0, 8.1, 8.2, 8.3, 8.4, 9.0, 9.2
What about rotational drives? 
6 x 10k SAS drives (RAID 10) 
P400 with 512MB write cache
[chart] pgbench / large read-write (SAS)
HP DL380 G5 (2x Xeon E5450, 16 GB DDR2 RAM), 6x 10k SAS RAID10
x-axis: number of clients, y-axis: transactions per second
versions: 7.4 (sas), 8.4 (sas), 9.4 (sas)
What about a different machine?
Alternative hardware 
Workstation i5 (2011-2013) 
● 1x i5-2500k (4 cores @ 3.3 GHz, 6MB cache) 
● 8GB RAM (DIMM DDR3 1333 MHz) 
● S3700 100GB (SSD) 
● Gentoo, kernel 3.12, ext4
[chart] pgbench / large read-only (Xeon vs. i5)
2x Xeon E5450 (3GHz), 16 GB DDR2 RAM, Intel S3700 100GB SSD
i5-2500k (3.3 GHz), 8GB DDR3 RAM, Intel S3700 100GB SSD
x-axis: number of clients, y-axis: transactions per second
versions: 7.4 (Xeon), 9.0 (Xeon), 7.4 (i5), 9.0 (i5)
[chart] pgbench / small read-only (Xeon vs. i5)
2x Xeon E5450 (3GHz), 16 GB DDR2 RAM, Intel S3700 100GB SSD
i5-2500k (3.3 GHz), 8GB DDR3 RAM, Intel S3700 100GB SSD
x-axis: number of clients, y-axis: transactions per second
versions: 7.4 (Xeon), 9.0 (Xeon), 9.4 (Xeon), 7.4 (i5), 9.0 (i5), 9.4 (i5)
[chart] pgbench / small read-write (Xeon vs. i5)
2x Xeon E5450 (3GHz), 16 GB DDR2 RAM, Intel S3700 100GB SSD
i5-2500k (3.3 GHz), 8GB DDR3 RAM, Intel S3700 100GB SSD
x-axis: number of clients, y-axis: transactions per second
versions: 7.4 (Xeon), 9.4 (Xeon), 7.4 (i5), 9.4b1 (i5)
Legend has it that older versions
work better with lower memory limits
(shared_buffers etc.)
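For reference, shared_buffers is set in postgresql.conf and can be checked per run like this (the two values below are the sizes compared in the next chart):

-- verify the configured buffer cache size for each run
SHOW shared_buffers;  -- '128MB' for the “(small)” runs, '2GB' otherwise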
[chart] pgbench / large read-only (i5-2500)
different sizes of shared_buffers (128MB vs. 2GB)
x-axis: number of clients, y-axis: transactions per second
series: 7.4, 7.4 (small), 8.0, 8.0 (small), 9.4b1
pgbench / summary 
● much better 
● improved locking 
– much better scalability to a lot of cores (>= 64) 
● a lot of different optimizations 
– significant improvements even for small client counts 
● lessons learned 
– CPU frequency is a very poor measure
– similarly for the number of cores, etc.
TPC-DS 
“Decision Support” benchmark 
(aka “Data Warehouse” benchmark)
TPC-DS 
● analytics workloads / warehousing 
– queries processing large data sets (GROUP BY, JOIN) 
– non-uniform distribution (more realistic than TPC-H) 
● 99 query templates defined (TPC-H just 22) 
– some broken (failing generator) 
– some unsupported (e.g. ROLLUP/CUBE) 
– 41 queries >= 7.4 
– 61 queries >= 8.4 (CTE, Window functions) 
– no query rewrites
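To give a flavour of the workload, a simplified TPC-DS-style aggregation is sketched below; it uses standard TPC-DS table and column names but is only an illustration, not one of the official 99 query templates:

-- illustrative only: total sales per brand and year, joining the large
-- store_sales fact table against two dimension tables
SELECT d.d_year,
       i.i_brand,
       SUM(ss.ss_ext_sales_price) AS total_sales
FROM store_sales ss
JOIN date_dim d ON ss.ss_sold_date_sk = d.d_date_sk
JOIN item i ON ss.ss_item_sk = i.i_item_sk
GROUP BY d.d_year, i.i_brand
ORDER BY total_sales DESC
LIMIT 100;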
TPC-DS 
● 1GB and 16GB datasets (raw data) 
– 1GB insufficient for publication, 16GB nonstandard (according to TPC) 
● interesting anyway ...
– a lot of databases fit into 16GB 
– shows trends (applicable to large DBs) 
● schema 
– pretty much default (standard compliance FTW!) 
– same for all versions (indexes on PK/join keys, a few more indexes)
– definitely room for improvement (per version, ...)
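The indexing mentioned above would look roughly like the sketch below (index names and columns are assumptions for illustration; the benchmark's exact index set may differ):

-- indexes on the join keys of the store_sales fact table (assumed set)
CREATE INDEX store_sales_sold_date_idx ON store_sales (ss_sold_date_sk);
CREATE INDEX store_sales_item_idx ON store_sales (ss_item_sk);
CREATE INDEX store_sales_customer_idx ON store_sales (ss_customer_sk);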
[chart] TPC-DS / database size per 1GB raw data
x-axis: version (8.0, 8.1, 8.2, 8.3, 8.4, 9.0, 9.1, 9.2, 9.3, 9.4, head), y-axis: size [MB]
series: data, indexes
[chart] TPC-DS / load duration (1GB)
x-axis: version (8.0, 8.1, 8.2, 8.3, 8.4, 9.0, 9.1, 9.2, 9.3, 9.4, head), y-axis: duration [s]
series: copy, indexes, vacuum full, vacuum freeze, analyze
[chart] TPC-DS / load duration (1GB)
x-axis: version (8.0, 8.1, 8.2, 8.3, 8.4, 9.0, 9.1, 9.2, 9.3, 9.4, head), y-axis: duration [s]
series: copy, indexes, vacuum freeze, analyze
[chart] TPC-DS / load duration (16 GB)
x-axis: version (8.0, 8.1, 8.2, 8.3, 8.4, 9.0, 9.1, 9.2, 9.3, 9.4), y-axis: duration [seconds]
series: LOAD, INDEXES, VACUUM FREEZE, ANALYZE
[chart] TPC-DS / duration (1GB)
average duration of 41 queries
x-axis: version (8.0, 8.1, 8.2, 8.3, 8.4, 9.0, 9.1, 9.2, 9.3, 9.4, head), y-axis: seconds
[chart] TPC-DS / duration (16 GB)
average duration of 41 queries
x-axis: version (8.0*, 8.1, 8.2, 8.3, 8.4, 9.0, 9.1, 9.2, 9.3, 9.4), y-axis: duration [seconds]
TPC-DS / summary 
● data load much faster 
– most of the time spent on indexes (parallelize, RAM) 
– ignoring VACUUM FULL (different implementation since 9.0)
– slightly less space occupied 
● much faster queries 
– in total the speedup is ~6x 
– wider index usage, index only scans
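As an illustration of the index-only scan point: on 9.2+ a query like the sketch below can be answered from the index alone, assuming a covering index and a recently vacuumed table (the index and date range are hypothetical, not part of the benchmark):

-- with an assumed index on (ss_sold_date_sk, ss_ext_sales_price),
-- 9.2+ can satisfy this aggregate with an index-only scan
EXPLAIN
SELECT SUM(ss_ext_sales_price)
FROM store_sales
WHERE ss_sold_date_sk BETWEEN 2451180 AND 2451544;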
Fulltext Benchmark 
testing GIN and GiST indexes 
through fulltext search
Fulltext benchmark 
● searching through pgsql mailing list archives 
– ~1M messages, ~5GB of data 
● ~33k real-world queries (from postgresql.org) 
– synthetic queries lead to about the same results
SELECT id FROM messages 
WHERE body @@ ('high & performance')::tsquery 
ORDER BY ts_rank(body, ('high & performance')::tsquery) 
DESC LIMIT 100;
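The two index types being compared would be created roughly as sketched below, assuming body is a tsvector column (as the query above implies); the actual index definitions in the benchmark may differ:

-- fulltext indexes on the tsvector column (names are assumptions)
CREATE INDEX messages_body_gist ON messages USING gist (body);
CREATE INDEX messages_body_gin ON messages USING gin (body);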
[chart] Fulltext benchmark / load
COPY / with indexes and PL/pgSQL triggers
series: COPY, VACUUM FREEZE, ANALYZE
y-axis: duration [sec]
[chart] Fulltext benchmark / GiST
33k queries from postgresql.org [TOP 100]
x-axis: version (8.0, 8.1, 8.2, 8.3, 8.4, 9.0, 9.1, 9.2, 9.3, 9.4), y-axis: total runtime [sec]
[chart] Fulltext benchmark / GIN
33k queries from postgresql.org [TOP 100]
x-axis: version (8.2, 8.3, 8.4, 9.0, 9.1, 9.2, 9.3, 9.4), y-axis: total runtime [sec]
[chart] Fulltext benchmark / GiST vs. GIN
33k queries from postgresql.org [TOP 100]
x-axis: version (8.0, 8.1, 8.2, 8.3, 8.4, 9.0, 9.1, 9.2, 9.3, 9.4), y-axis: total duration [sec]
series: GiST, GIN
[chart] Fulltext benchmark / 9.3 vs. 9.4 (GIN fastscan)
9.4 durations divided by 9.3 durations (e.g. 0.1 means a 10x speedup)
x-axis: duration on 9.3 [milliseconds, log scale], y-axis: 9.4 duration (relative to 9.3)
Fulltext / summary 
● GIN fastscan 
– queries combining “frequent & rare” 
– 9.4 scans “frequent” posting lists first 
– exponential speedup for such queries 
– ... which is quite nice ;-) 
● only ~5% of queries slowed down
– mostly queries below 1ms (measurement error)
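An example of the “frequent & rare” pattern (terms chosen for illustration: “postgresql” is very common in the archives, “fuzzystrmatch” is rare):

-- 9.4 can skip most of the long posting list of the frequent term
-- instead of scanning it fully
SELECT id FROM messages
WHERE body @@ ('postgresql & fuzzystrmatch')::tsquery;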
http://blog.pgaddict.com 
http://planet.postgresql.org 
http://slidesha.re/1CUv3xO
