@louisemeta
Postgres index types
(And where to find them)
Louise Grandjonc
DjangoCon 2019
@louisemeta
About me
Software engineer at Citus/Microsoft
Previously lead python developer
Postgres enthusiast
PostgresWomen co-founder
@louisemeta on twitter
www.louisemeta.com
louise.grandjonc@microsoft.com
@louisemeta !2
@louisemeta
What we’re going to talk about
1. What are indexes for?
2. Creating indexes
3. B-Tree
4. GIN
5. GiST
6. Brin
@louisemeta !3
@louisemeta
First things first: the crocodiles
• 250k crocodiles
• 100k birds
• 400K appointments
@louisemeta !4
@louisemeta
What are indexes for?
@louisemeta
Constraints
Some constraints transform into indexes.
- PRIMARY KEY
- UNIQUE
- EXCLUDE USING
"crocodile_pkey" PRIMARY KEY, btree (id)
"crocodile_email_uq" UNIQUE CONSTRAINT, btree (email)
Indexes:
"appointment_pkey" PRIMARY KEY, btree (id)
"appointment_crocodile_id_schedule_excl" EXCLUDE USING gist
(crocodile_id WITH =, schedule WITH &&)
In the crocodile table
In the appointment table
@louisemeta !6
@louisemeta
Query optimization
Often the main reason why we create indexes
Why do indexes make queries faster?
In an index, tuples (value, pointer) are stored.
Instead of reading the entire table for a value, you just go to the index (kind of like in an
encyclopedia)
@louisemeta !7
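A quick way to check whether a query actually goes to the index is EXPLAIN. A minimal sketch (exact costs and plan shapes will differ on your data):

EXPLAIN SELECT * FROM crocodile WHERE number_of_teeth = 10;
-- without an index: Seq Scan on crocodile, Filter: (number_of_teeth = 10)
-- with a btree index on number_of_teeth: Index Scan (or Bitmap Index Scan) using that index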
@louisemeta
Creating indexes
@louisemeta
@louisemeta
Creating an index
@louisemeta !9
Let’s say we would like to do queries like:
Crocodile.objects.filter(number_of_teeth=10)
SELECT * FROM crocodile WHERE number_of_teeth = 10;
Time: 31ms
class Crocodile(models.Model):
…
number_of_teeth = models.IntegerField(db_index=True)
CREATE INDEX (optional index name) ON crocodile (number_of_teeth);
SELECT * FROM crocodile WHERE number_of_teeth = 10;
Time: 6ms
Raw SQL
New timing
@louisemeta
Creating a unique index
@louisemeta !10
We want to make sure that you can’t create two accounts with
the same email:
class Crocodile(models.Model):
…
email = models.EmailField(max_length=255, unique=True)
CREATE UNIQUE INDEX ON crocodile (email);
Raw SQL
Crocodile.objects.create(
email='louise@croco.com',
first_name='Louise',
last_name='grandjonc',
birthday='1991-12-21',
number_of_teeth=32)
DETAIL: Key (email)=(louise@croco.com)
already exists.
Insert result for a duplicate row
@louisemeta
Creating a partial index
95% of the appointments in our database have the field done=True.
Appointment.objects.filter(emergency_level__gt=8, done=False)
CREATE INDEX ON appointment (emergency_level);
@louisemeta !11
class Appointment(models.Model):
…
class Meta:
indexes = [models.Index(fields=['emergency_level'],
name='apptmt_emergency_level_idx',
condition=Q(done=False))]
In Django >= 2.2
@louisemeta
Creating a partial index
@louisemeta !12
CREATE INDEX ON appointment (emergency_level) WHERE done IS FALSE;
Raw SQL
Old index without condition: size of the index 13 MB, time of the query 29.106 ms
New index with condition: size of the index 352 kB, time of the query 3.639 ms
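To compare index sizes yourself, pg_size_pretty and pg_relation_size can be used; a sketch, where the index name is the default Postgres would generate (adjust to your own):

SELECT pg_size_pretty(pg_relation_size('appointment_emergency_level_idx'));
-- or, in psql: \di+ appointment_emergency_level_idx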
@louisemeta
Creating a partial unique index
@louisemeta !13
Want to add a UNIQUE index but have duplicates due to history or
soft delete?
class Crocodile(models.Model):
...
class Meta:
constraints = [
models.UniqueConstraint(
fields=['email'],
name='croco_email_uniq',
condition=Q(created_at__gte='2019-09-01'))]
In Django >= 2.2
@louisemeta
Creating a partial unique index
@louisemeta !14
CREATE UNIQUE INDEX ON crocodile (email) WHERE created_at > '2019-01-01';
Size of previous index: 6648 kB
Size of the new index: 112 kB
Why use a partial unique index?
- Save disk space with smaller index
- Faster inserts because the index tree is shorter to explore (especially
when you have a huge volume of old data)
Raw SQL
@louisemeta
Creating a multi-column index
@louisemeta !15
We have a job running regularly to list a bird’s emergencies, it runs
the following query:
class Appointment(models.Model):
…
class Meta:
indexes = [models.Index(fields=['plover_bird', 'emergency_level'],
name='apptmt_plover_emergency_idx')]
Appointment.objects.filter(emergency_level__gte=9,
plover_bird=plover_bird)
CREATE INDEX ON appointment (plover_bird_id, emergency_level);
Time: 41.560 ms
Raw SQL
Time after: 0.606 ms
@louisemeta
Ordering the columns in a multi-column index
@louisemeta !16
Two things to consider:
- Re-using the index: the leftmost column(s) can be re-used for other queries.
The index is ordered by the first column, so it can also be used by this query (see the sketch after the diagram):
Appointment.objects.filter(plover_bird=plover_bird)
[Diagram: table rows (plover_bird_id, emergency_level) in heap order on one side; on the other, the index entries sorted by (plover_bird_id, emergency_level), each carrying a RowID, e.g. (31, 6), (31, 10), (31, 10), (32, 1), (32, 6) … (55, 10).]
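In practice that looks like this (a sketch; the actual plans depend on table statistics):

-- can use the (plover_bird_id, emergency_level) index: the leftmost column is filtered
EXPLAIN SELECT * FROM appointment WHERE plover_bird_id = 22551;
-- usually cannot use it efficiently: only the second column is filtered
EXPLAIN SELECT * FROM appointment WHERE emergency_level >= 9;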
@louisemeta
Ordering the columns in a multi-column index
@louisemeta !17
- The most selective column should come first
croco_talk=# SELECT COUNT(*) FROM appointment WHERE emergency_level >= 9;
 count
-------
 75982
(1 row)
croco_talk=# SELECT COUNT(*) FROM appointment WHERE plover_bird_id = 22551;
 count
-------
     5
(1 row)
In this case, with plover_bird_id as the first column, the index first narrows the search down to 5 rows, and the emergency_level filter is then applied to only those 5 rows.
@louisemeta
B-Trees
@louisemeta
@louisemeta
B-Trees internal data structure
@louisemeta !19
[Diagram: a B-Tree. A root page (high key: none) points to parent pages with high keys 16, 31 and none; each parent points to leaf pages. Every leaf page has a high key and items, each item being a (value, pointer) pair pointing to a crocodile row, e.g. the leaf with high key 12 holds values 1, 1, 2 … 10, 11 pointing to croco rows 10, 12, 23 … 1, 2.]
- A B-Tree is a balanced tree
- All the leaves are at equal distance from the root
- A parent node can have multiple children, minimizing the tree’s depth
@louisemeta
B-Trees internal data structure - 2
Pages
The root, the parents, and the leaves are all pages with the same structure.
Pages have:
- A block number (pointer)
- A high key (defines the highest value found in a page)
- Items
@louisemeta !20
@louisemeta
B-Trees internal data structure - 4
Pages high key
- Any item in the page will have a value lower than or equal to the high key
In page 3, I will find crocodiles with 16 or fewer teeth
In page 289, with 31 or fewer
And in page 575, there is no high key as it’s the rightmost page.
@louisemeta !21
@louisemeta
B-Trees internal data structure - 5
Items
An item contains:
- A value (the indexed value of the row in the leaves; the value of the first row of the child page in the parents)
- A pointer (to the table row in the leaves; to the child page in the parents)
@louisemeta !22
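If you want to look at these pages yourself, the pageinspect extension exposes them; a minimal sketch, assuming a btree index named crocodile_number_of_teeth_idx (pick any btree index you have):

CREATE EXTENSION IF NOT EXISTS pageinspect;
-- metadata page: root block number, tree depth (level)
SELECT * FROM bt_metap('crocodile_number_of_teeth_idx');
-- items of block 1: the (value, pointer) pairs described above
SELECT itemoffset, ctid, data FROM bt_page_items('crocodile_number_of_teeth_idx', 1);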
@louisemeta
To sum it up
@louisemeta !23
[Same B-Tree diagram as on slide 19.]
- A B-Tree is a balanced tree
- The values indexed are the values of the rows
- Data is stored in pages
- Pages have a high key defining the biggest value in the page
- Pages have items pointing to another page or to the row
@louisemeta
What are B-Trees good for?
@louisemeta !24
[Same B-Tree diagram as on slide 19.]
B-Trees are good for the following operations: =, >, <, >=, <=
Why?
Because the value indexed is the value of the column(s), so we can
easily perform a binary search in the B-Tree
@louisemeta
GIN
@louisemeta
GIN
- Used to index arrays, jsonb, and tsvector (for fulltext search) columns.
- Efficient for <@, &&, @@@ operators
New column healed_teeth:
croco=# SELECT email, number_of_teeth, healed_teeth FROM crocodile WHERE id = 1;
-[ RECORD 1 ]---+--------------------------------------------------------
email | louise.grandjonc1@croco.com
number_of_teeth | 58
healed_teeth | {16,11,55,27,22,41,38,2,5,40,52,57,28,50,10,15,1,12,46}
!26
@louisemeta
Creating a GIN index
Here is how to create the GIN index for this column
CREATE INDEX ON crocodile USING GIN(healed_teeth);
!27
from django.contrib.postgres.indexes import GinIndex
class Crocodile(models.Model):
...
class Meta:
indexes = [GinIndex(fields=['healed_teeth'])]
Raw SQL
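The kind of query this index speeds up, using the same containment operator as the EXPLAIN plans on the next slide:

SELECT email FROM crocodile WHERE '{1,2}'::integer[] <@ healed_teeth;
-- crocodiles whose healed_teeth array contains both 1 and 2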
@louisemeta
GIN
How is it different from a BTree?
- In a GIN index, the array is split and each element becomes an entry
- The entries are unique
- Because entries are unique, the leaves keep a list of pointers to the rows (or a pointer to a separate posting tree when that list gets large)
!28
[Diagram: a GIN index over healed_teeth. A root page points to parent pages holding the unique entries (1, 4, 6 …, 10, 15, 17 …, 20, 24, 26 …). Leaf pages hold, for each entry, a posting list of row pointers, e.g. {(269, 49), (296, 51), (296, 54), (296, 57), …}; when the list gets too big, the leaf instead points to a separate posting tree of pages.]
@louisemeta
GIN
How is it different from a BTree?
With the GIN index:
Bitmap Heap Scan on crocodile
  (cost=516.59..6613.42 rows=54786 width=29)
  (actual time=15.960..38.197 rows=73275 loops=1)
  Recheck Cond: ('{1,2}'::integer[] <@ healed_teeth)
  Heap Blocks: exact=4218
  -> Bitmap Index Scan on crocodile_healed_teeth_idx
     (cost=0.00..502.90 rows=54786 width=0)
     (actual time=15.302..15.302 rows=73275 loops=1)
     Index Cond: ('{1,2}'::integer[] <@ healed_teeth)
Planning time: 0.124 ms
Execution time: 41.018 ms
(7 rows)
Without the index:
Seq Scan on crocodile (cost=…)
  Filter: ('{1,2}'::integer[] <@ healed_teeth)
  Rows Removed by Filter: 250728
Planning time: 0.157 ms
Execution time: 161.716 ms
(5 rows)
!29
@louisemeta
To sum it up
@louisemeta !30
- A GIN index is a balanced tree
- Each value in the tree is unique
- The row value is split and each value is an entry
- Efficient for <@, &&, @@@ operators
@louisemeta
GIST
@louisemeta
GiST - keys
Differences with a BTree index
- Data isn’t ordered
- The key ranges can overlap
Which means that the same value can be inserted in different pages
To be more readable, the following example is for an integer range type :)
[Diagram: a GiST root page (block 0, level 0) holding the overlapping key ranges [3, 5], [0, 2], [4, 8] and [7, 9], each pointing to a parent page at level 1 (blocks 4699, 1610, 813, 6249).]
@louisemeta
Creating a GiST index
!33
Here is how to create the GiST index for this column
CREATE INDEX ON appointment USING GIST(schedule);
from django.contrib.postgres.indexes import GistIndex
class Appointment(models.Model):
...
class Meta:
indexes = [GistIndex(fields=['schedule'])]
Raw SQL
@louisemeta
Why use GiST
- Useful for overlapping data (geometries, arrays, ranges, etc.)
- Especially useful when using PostGIS
- Nearest-neighbor searches
- Can be used for full text search (tsvector, tsquery)
!34
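For example, with the schedule column used by the EXCLUDE constraint at the beginning, the GiST index can answer overlap queries; a sketch, assuming schedule is a tstzrange:

SELECT * FROM appointment
WHERE schedule && tstzrange('2019-09-23 09:00+00', '2019-09-23 10:00+00');
-- && is the "overlaps" operator, the same one backing EXCLUDE USING gist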
@louisemeta
GiST or GIN for fulltext search
movies=# CREATE INDEX ON film USING GIN(fulltext) with (fastupdate=off);
CREATE INDEX
Time: 8.083 ms
movies=# INSERT INTO film (title, description, language_id) VALUES ('Nightmare at the
dentist', 'A crocodile calls his dentist on halloween and ends up toothless and very
sad, warning: not for kids, or teeth-sensitive crocodiles', 1);
INSERT 0 1
Time: 3.057 ms
movies=# INSERT INTO film (title, description, language_id) VALUES ('Nightmare at the
dentist', 'The terrible adventure of a crocodile who never goes to the dentist', 1);
INSERT 0 1
Time: 1.323 ms
- Maintaining a GIN index is slower than GiST
!35
@louisemeta
GiST or GIN for fulltext search
- Lookups are faster with GIN
With the GiST index:
movies=# SELECT COUNT(*) FROM film WHERE fulltext @@ to_tsquery('crocodile');
 count
-------
   106
(1 row)
Time: 1.275 ms
With the GIN index:
movies=# SELECT COUNT(*) FROM film WHERE fulltext @@ to_tsquery('crocodile');
 count
-------
   106
(1 row)
Time: 0.467 ms
!36
@louisemeta
GiST or GIN for fulltext search
- GIN indexes are larger than GiST
movies=# \di+ film_fulltext_idx
List of relations
Schema | Name | Type | Owner | Table | Size | Description
--------+-------------------+-------+----------+-------+-------+-------------
public | film_fulltext_idx | index | postgres | film | 88 kB |
(1 row)
movies=# \di+ film_fulltext_gin_idx
List of relations
Schema | Name | Type | Owner | Table | Size | Description
--------+-----------------------+-------+----------+-------+--------+-------------
public | film_fulltext_gin_idx | index | postgres | film | 112 kB |
(1 row)
!37
@louisemeta
BRIN
@louisemeta
BRIN
Internal data structure
- Block Range Index
- Not a balanced tree
- Not even a tree
- Block range: group of pages physically adjacent
- For each block range: the range of values is stored
- BRIN indexes are very small
- Fast scanning on large tables
!39
@louisemeta
BRIN
Internal data structure
SELECT * FROM brin_page_items(get_raw_page('appointment_created_at_idx', 2), 'appointment_created_at_idx');
itemoffset | blknum | attnum | allnulls | hasnulls | placeholder | value
------------+--------+--------+----------+----------+-------------+---------------------------------------------------
1 | 0 | 1 | f | f | f | {2008-03-01 00:00:00-08 .. 2009-07-07 07:30:00-07}
2 | 128 | 1 | f | f | f | {2009-07-07 08:00:00-07 .. 2010-11-12 15:30:00-08}
3 | 256 | 1 | f | f | f | {2010-11-12 16:00:00-08 .. 2012-03-19 23:30:00-07}
4 | 384 | 1 | f | f | f | {2012-03-20 00:00:00-07 .. 2013-07-26 07:30:00-07}
5 | 512 | 1 | f | f | f | {2013-07-26 08:00:00-07 .. 2014-12-01 15:30:00-08}
SELECT id, created_at FROM appointment WHERE ctid='(0, 1)'::tid;
id | created_at
--------+------------------------
101375 | 2008-03-01 00:00:00-08
(1 row)
!40
@louisemeta
BRIN
Internal data structure
SELECT * FROM brin_page_items(get_raw_page('crocodile_birthday_idx', 2),
'crocodile_birthday_idx');
itemoffset | blknum | attnum | allnulls | hasnulls | placeholder | value
------------+--------+--------+----------+----------+-------------+----------------------------
1 | 0 | 1 | f | f | f | {1948-09-05 .. 2018-09-04}
2 | 128 | 1 | f | f | f | {1948-09-07 .. 2018-09-03}
3 | 256 | 1 | f | f | f | {1948-09-05 .. 2018-09-03}
4 | 384 | 1 | f | f | f | {1948-09-05 .. 2018-09-04}
5 | 512 | 1 | f | f | f | {1948-09-05 .. 2018-09-02}
6 | 640 | 1 | f | f | f | {1948-09-09 .. 2018-09-04}
…
(14 rows)
In this case, the values in birthday have no correlation with the physical
location, so the index would not speed up the search, as all pages would have
to be visited.
BRIN is interesting for data where the value is correlated with the
physical location.
!41
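You can check that correlation in pg_stats before choosing BRIN; a sketch, using the tables and columns from these examples:

SELECT tablename, attname, correlation FROM pg_stats
WHERE (tablename, attname) IN (('appointment', 'created_at'), ('crocodile', 'birthday'));
-- correlation close to 1 or -1: good BRIN candidate; close to 0: BRIN won't help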
@louisemeta
BRIN
Warning on DELETE and INSERT
SELECT * FROM brin_page_items(get_raw_page('appointment_created_at_idx', 2), 'appointment_created_at_idx');
itemoffset | blknum | attnum | allnulls | hasnulls | placeholder | value
------------+--------+--------+----------+----------+-------------+---------------------------------------------------
1 | 0 | 1 | f | f | f | {2008-03-01 00:00:00-08 .. 2018-07-01 07:30:00-07}
2 | 128 | 1 | f | f | f | {2009-07-07 08:00:00-07 .. 2018-07-01 23:30:00-07}
3 | 256 | 1 | f | f | f | {2010-11-12 16:00:00-08 .. 2012-03-19 23:30:00-07}
4 | 384 | 1 | f | f | f | {2012-03-20 00:00:00-07 .. 2018-07-06 23:30:00-07}
DELETE FROM appointment WHERE created_at >= '2009-07-07' AND created_at < '2009-07-08';
DELETE FROM appointment WHERE created_at >= '2012-03-20' AND created_at < '2012-03-25';
After deleting rows and running VACUUM on the appointment table, new rows are inserted into the freed space.
The BRIN index ends up with block ranges covering very wide value ranges.
A search will then have to visit a lot of pages.
!42
@louisemeta
Creating a BRIN index
!43
Here is how to create the BRIN index for this column
CREATE INDEX ON crocodile USING BRIN(created_at);
from django.contrib.postgres.indexes import BrinIndex
class Crocodile(models.Model):
...
class Meta:
indexes = [BrinIndex(fields=['created_at'])]
Raw SQL
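The block range size visible earlier (blknum jumping by 128) is the default pages_per_range; it can be tuned as a storage parameter, e.g.:

CREATE INDEX ON crocodile USING BRIN(created_at) WITH (pages_per_range = 64);
-- smaller ranges: bigger but more precise index; larger ranges: smaller index, more pages scanned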
@louisemeta
Conclusion
- B-Tree
- Great for <, >, =, >=, <=
- GIN
- Fulltext search, jsonb, arrays
- Inserts can be slow because of the uniqueness of the keys
- GiST
- Great for overlapping
- Using key class functions
- Can be implemented for any data type
- BRIN
- Great for huge tables with correlation between value and physical location
- <, >, =, >=, <=
!44
@louisemeta
Questions
Thanks for your attention
Go read the articles www.louisemeta.com
For now only the articles on B-Trees and GIN are
published, but I’ll announce the rest on
Twitter @louisemeta
Crocodiles by https://www.instagram.com/zimmoriarty/?hl=en
!45