CS 542 Database Management Systems Query Optimization J Singh  March 28, 2011
Outline
- Convert SQL query to a parse tree
- Semantic checking: attributes, relation names, types
- Convert to a logical query plan (relational algebra expression); deal with subqueries
- Improve the logical query plan: use algebraic transformations, group together certain operators, evaluate the logical plan based on estimated size of relations
- Convert to a physical query plan: search the space of physical plans, choose order of operations, complete the physical query plan
Desired Endpoint: Example Physical Query Plans
For σ x=1 AND y=2 AND z<5 (R): IndexScan(R, y=2), then Filter(x=1 AND z<5)
For R ⋈ S ⋈ U: two-pass hash-join (101 buffers) of TableScan(R) and TableScan(S), materialize the result, then two-pass hash-join (101 buffers) with TableScan(U)
Physical Plan Selection
Plan cost is governed by disk I/O, which in turn is governed by:
- The particular operation being performed
- Size of intermediate results, as derived last week (sec 16.4 of book)
- Physical operator implementation used, e.g., one- or two-pass
- Operation ordering, esp. Join ordering
- Operation output: materialized or pipelined
Index-based physical plans (p1)
Selection example. What is the cost of σ a=v (R), assuming B(R) = 2,000, T(R) = 100,000, V(R, a) = 20?
- Table scan (assuming R is clustered): B(R) = 2,000 I/Os
- Index-based selection, if the index is clustering: B(R) / V(R,a) = 100 I/Os
- Index-based selection, if the index is unclustered: T(R) / V(R,a) = 5,000 I/Os
For small V(R, a), a table scan can be faster than an unclustered index. Heuristics that pick indexed over not-indexed can lead you astray: determine the cost of both methods and let the algorithm decide.
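For illustration only, here is a minimal sketch (not from the slides) of the same comparison; the helper name is hypothetical and the parameter values are the ones assumed above:

```python
# Hypothetical sketch: compare I/O cost estimates for evaluating sigma_{a=v}(R).
# B, T, V follow the textbook notation: blocks, tuples, distinct values of a.

def selection_costs(B, T, V):
    return {
        "table scan (R clustered)": B,        # read every block of R
        "clustering index on a":    B // V,   # ~B(R)/V(R,a) blocks hold matching tuples
        "unclustered index on a":   T // V,   # ~T(R)/V(R,a) tuples, roughly one I/O each
    }

costs = selection_costs(B=2_000, T=100_000, V=20)
for method, io in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{method}: {io} I/Os")
# With V(R,a) = 20 the unclustered index costs 5,000 I/Os, worse than the
# 2,000-I/O table scan -- so cost both options rather than trusting a heuristic.
```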
Index-based physical plans (p2)
Example: Join if S has an index on the join attribute. For each tuple in R, fetch the corresponding tuple(s) from S. Assume R is clustered. Cost:
- If index on S is clustering: B(R) + T(R) × B(S) / V(S,a)
- If index on S is unclustered: B(R) + T(R) × T(S) / V(S,a)
Another case: when R is the output of another iterator. Cost:
- B(R) is accounted for in the iterator
- If index on S is clustering: T(R) × B(S) / V(S,a)
- If index on S is unclustered: T(R) × T(S) / V(S,a)
- If S is not indexed but fits in memory: B(S)
A number of other cases
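The same formulas can be wrapped in a small estimator (again a hypothetical sketch; the example values at the bottom are made up purely to exercise the formulas):

```python
# Hypothetical sketch of the index-join cost estimates above
# (R probes an index on S's join attribute a).
def index_join_cost(B_R, T_R, B_S, T_S, V_Sa, s_index_clustering, r_pipelined=False):
    """Estimated disk I/Os for joining R with S via an index on S.a."""
    read_R = 0 if r_pipelined else B_R          # B(R) already paid for if R is pipelined in
    per_tuple = (B_S if s_index_clustering else T_S) / V_Sa
    return read_R + T_R * per_tuple

# Made-up example values, just to exercise the formulas:
print(index_join_cost(1_000, 10_000, 500, 50_000, 100, s_index_clustering=True))   # 51000.0
print(index_join_cost(1_000, 10_000, 500, 50_000, 100, s_index_clustering=False))  # 5001000.0
```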
Index-based physical plans (p3) Index-based join if both R and S have a sorted index (B+ tree) on the join attribute: then perform a merge join, called a zig-zag join. Cost: B(R) + B(S)
Grand Summary of Physical Plans (p1) Scans and Selects Index: N = None, C = Clustering, NC = Non-clustered
Grand Summary of Physical Plans (p2) Joins Index: N = None, C = Clustering, NC = Non-clustered Relation fits in memory: F = Yes, NF = No
Physical plans at non-leaf Operators (p1)
What if the input of the operator is from another operator?
For Select, cost = 0. The cost of pipelining is assumed to be zero; the number of tuples emitted is reduced.
For Join, when R is from an operator and S from a table:
- B(R) is accounted for in the iterator
- If index on S is clustering: T(R) × B(S) / V(S,a)
- If index on S is unclustered: T(R) × T(S) / V(S,a)
- If S is not indexed but fits in memory: B(S)
- If S is not indexed and doesn’t fit: k*B(S) for k chunks
- If S is not indexed and doesn’t fit: 3*B(S) for sort- or hash-join
Physical plans at non-leaf Operators (p2)
For Join, when R and S are both from operators, cost depends on whether the results are sorted by the Join attribute(s):
- If yes, we use the zig-zag algorithm and the cost is zero. Why?
- If either relation will fit in memory, the cost is zero. Why?
- At most, the cost is 2*(B(R) + B(S)). Why?
Example (787) Product(pname, maker), Company(cname, city)
Select Product.pname From Product, Company Where Product.maker=Company.cname and Company.city = “Seattle”
How do we execute this query?
Example (787) Product(pname, maker), Company(cname, city)
Select Product.pname From Product, Company Where Product.maker=Company.cname and Company.city = “Seattle”
Logical Plan: ⋈ maker=cname over Product(pname, maker) and σ city=“Seattle” (Company(cname, city))
Clustering indices: Product.pname, Company.cname
Unclustered indices: Product.maker, Company.city
Example (787) Physical Plans
Physical Plan 1: index-based selection σ city=“Seattle” (Company), then index-based join ⋈ cname=maker with Product(pname, maker)
Physical Plans 2a and 2b: merge-join ⋈ maker=cname of Product(pname, maker) with σ city=“Seattle” (Company); Company is read by index-scan, and Product is obtained by scan and sort (2a) or by index scan (2b)
Evaluate (787) Physical Plans
Physical Plan 1. Tuples: T(σ city=‘Seattle’ (Company)) = T(Company) / V(Company, city). Cost: T(σ city=‘Seattle’ (Company)) × T(Product) / V(Product, maker), or, simplifying, T(Company) / V(Company, city) × T(Product) / V(Product, maker)
Physical Plans 2a and 2b. Total cost: 2a: 3B(Product) + B(Company); 2b: T(Product) + B(Company)
Final Evaluation Plan Costs:
Plan 1: T(Company) / V(Company, city) × T(Product) / V(Product, maker)
Plan 2a: B(Company) + 3B(Product)
Plan 2b: B(Company) + T(Product)
Which is better? It depends on the data
Example (787) Evaluation Results
Common assumptions: T(Company) = 5,000, B(Company) = 500, M = 100, T(Product) = 100,000, B(Product) = 1,000. Assume V(Product, maker) ≈ T(Company), so T(Product) / V(Product, maker) ≈ 20.
Case 1: V(Company, city) ≈ T(Company), i.e., V(Company, city) = 5,000. Plan 1: 1 × 20 = 20; Plan 2a: 3,500; Plan 2b: 100,500
Case 2: V(Company, city) << T(Company), i.e., V(Company, city) = 20. Plan 1: 250 × 20 = 5,000; Plan 2a: 3,500; Plan 2b: 100,500
Reference from previous page:
Plan 1: T(Company) / V(Company, city) × T(Product) / V(Product, maker)
Plan 2a: B(Company) + 3B(Product)
Plan 2b: B(Company) + T(Product)
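A small sketch (not part of the deck) that reproduces this arithmetic under the stated assumptions:

```python
# Hypothetical sketch reproducing the plan-cost arithmetic above.
T_Company, B_Company = 5_000, 500
T_Product, B_Product = 100_000, 1_000
V_Product_maker = T_Company                  # assumption stated on the slide

def plan_costs(V_Company_city):
    plan1 = (T_Company / V_Company_city) * (T_Product / V_Product_maker)
    plan2a = B_Company + 3 * B_Product
    plan2b = B_Company + T_Product
    return {"Plan 1": plan1, "Plan 2a": plan2a, "Plan 2b": plan2b}

print(plan_costs(V_Company_city=5_000))   # Case 1: Plan 1 = 20 (wins)
print(plan_costs(V_Company_city=20))      # Case 2: Plan 1 = 5,000; Plan 2a = 3,500 (wins)
```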
Query Optimization Have a SQL query Q. Create a plan P. Find equivalent plans P = P’ = P’’ = … Choose the “cheapest”. HOW??
Logical Query Plan
SELECT P.buyer FROM Purchase P, Person Q WHERE P.buyer=Q.name AND Q.city=‘seattle’ AND Q.phone > ‘5430000’
Plan: π buyer ( σ City=‘seattle’ AND phone>’5430000’ ( Purchase ⋈ Buyer=name Person ) )
In class: find a “better” plan P’
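One textbook-style improvement, sketched here only as an illustration (it may or may not be the exact plan worked out in class), is to push the selections on Person below the join so that only the matching Person tuples reach the join:
π buyer ( Purchase ⋈ Buyer=name σ City=‘seattle’ AND phone>’5430000’ (Person) )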
CS 542 Database Management Systems Query Optimization – Choosing the Order of Operations J Singh  March 28, 2011
Outline
- Convert SQL query to a parse tree
- Semantic checking: attributes, relation names, types
- Convert to a logical query plan (relational algebra expression); deal with subqueries
- Improve the logical query plan: use algebraic transformations, group together certain operators, evaluate the logical plan based on estimated size of relations
- Convert to a physical query plan: search the space of physical plans, choose order of operations, complete the physical query plan
Join Trees Recall that the following are equivalent:
R ⋈ (S ⋈ U)
(R ⋈ S) ⋈ U
S ⋈ (R ⋈ U)
But they are not equivalent from an execution viewpoint. Considerable research has gone into picking the best order for Joins
Join Trees: R1 ⋈ R2 ⋈ … ⋈ Rn
Definitions: a plan = a join tree; a partial plan = a subtree of a join tree
(Figure: an example join tree over R1, R2, R3, R4)
Left & Right Join Arguments The argument relations in joins determine the cost of the join In Physical Query Plans, the left argument of the join is  Called the build relation Assumed to be smaller Stored in main-memory
Left & Right Join Arguments The right argument of the join is Called the probe relation  Read a block at a time Its tuples are matched with those of build relation The join algorithms which distinguish between the arguments are: One-pass join Nested-loop join Index join
Types of Join Trees: left deep, bushy, …
(Figures: example join trees over R1 … R5, one per shape)
Many different orders; very important to pick the right one
Optimization Algorithms Heuristic based Cost based Dynamic programming: System R Rule-based optimizations: DB2, SQL-Server
Dynamic Programming Given: a query  R1 ⋈R2 ⋈… ⋈Rn Assume we have a function cost() that gives us the cost of a join tree Find the best join tree for the query
Dynamic Programming Problem Statement Given: a query  R1 ⋈ R2 ⋈… ⋈Rn Assume we have a function cost() that gives us the cost of a join tree Find the best join tree for the query Idea: for each subset of {R1, …, Rn}, compute the best plan for that subset Algorithm: In increasing order of set cardinality, compute the cost for Step 1: for {R1}, {R2}, …, {Rn} Step 2: for {R1,R2}, {R1,R3}, …, {Rn-1, Rn} … Step n: for {R1, …, Rn} It is a bottom-up strategy Skipping further details of the algorithm Read from book if interested Will not be on the exam
Dynamic Programming Algorithm
Best Plan (R1 ⋈ R2 ⋈ … ⋈ Rn) = min-cost plan of:
Best Plan (R2 ⋈ R3 ⋈ … ⋈ Rn) ⋈ R1
Best Plan (R1 ⋈ R3 ⋈ … ⋈ Rn) ⋈ R2
…
Best Plan (R1 ⋈ R2 ⋈ … ⋈ Rn-1) ⋈ Rn
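A minimal sketch of this bottom-up enumeration (not from the slides; best_join_plan and join_cost are hypothetical names, and the cost function used below is a toy stand-in for the real cost model built from the size estimates discussed earlier):

```python
# Hypothetical sketch of bottom-up dynamic programming over join subsets.
from itertools import combinations

def best_join_plan(relations, join_cost):
    """best[subset] = (estimated cost, nested-pair plan) for joining that subset."""
    best = {frozenset([r]): (0, r) for r in relations}       # singletons cost nothing here
    for size in range(2, len(relations) + 1):
        for subset in map(frozenset, combinations(relations, size)):
            candidates = []
            for last in subset:                               # try each relation as the last one joined in
                rest = subset - {last}
                sub_cost, sub_plan = best[rest]
                candidates.append((sub_cost + join_cost(rest, last), (sub_plan, last)))
            best[subset] = min(candidates, key=lambda c: c[0])
    return best[frozenset(relations)]

# Toy usage with a made-up cost function (NOT a real cost model):
print(best_join_plan(["R1", "R2", "R3", "R4"], lambda rest, last: 10 * len(rest)))
```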
Outline
- Convert SQL query to a parse tree
- Semantic checking: attributes, relation names, types
- Convert to a logical query plan (relational algebra expression); deal with subqueries
- Improve the logical query plan: use algebraic transformations, group together certain operators, evaluate the logical plan based on estimated size of relations
- Convert to a physical query plan: search the space of physical plans, choose order of operations, complete the physical query plan
Three topics:
- Choosing the physical implementations (e.g., select and join methods)
- Decisions regarding materialized vs pipelined
- Notation for physical query plans
Choosing a Selection Method
Algorithm for each selection operator:
1. Can we use an index created on an attribute? If yes, index-scan (otherwise table-scan).
2. After retrieving all condition-satisfied tuples in (1), filter them with the remaining selection conditions.
In other words, when computing σ C1 AND C2 AND … AND Cn (R), we index-scan on Ci, then filter the result on all other Cj, where j ≠ i.
The next 2 pages show an example where we examine several options and pick the best one
Selection Method Example (p1)
Selection: σ x=1 AND y=2 AND z<5 (R), where the parameters of R are: T(R) = 5,000, B(R) = 200, V(R, x) = 100, V(R, y) = 500
Relation R is clustered; x and y have non-clustering indices; z has a clustering index
Selection Method Example (p2)
Selection options:
1. Table-scan, then filter x, y, z. Cost is B(R) = 200 since R is clustered.
2. Use index on x=1, then filter on y, z. Cost is 50 since T(R) / V(R, x) = 5,000/100 = 50 tuples; x is not clustering.
3. Use index on y=2, then filter on x, z. Cost is 10 since T(R) / V(R, y) = 5,000/500 = 10 tuples; y is not clustering.
4. Index-scan on the clustering index with z < 5, then filter x, y. Cost is about B(R)/3 = 67.
Therefore: first retrieve all tuples with y = 2 (option 3), then filter for x and z
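The same option comparison written as a tiny sketch (hypothetical, not from the slides; the z < 5 estimate assumes roughly a third of R qualifies, as above):

```python
# Hypothetical sketch of the option comparison for sigma_{x=1 AND y=2 AND z<5}(R).
T_R, B_R = 5_000, 200
V_x, V_y = 100, 500

options = {
    "table-scan, then filter x, y, z":            B_R,             # R is clustered
    "index on x=1, then filter y, z":             T_R // V_x,      # unclustered: ~1 I/O per tuple
    "index on y=2, then filter x, z":             T_R // V_y,
    "clustering index on z<5, then filter x, y":  round(B_R / 3),  # assumes ~1/3 of R satisfies z<5
}
best = min(options, key=options.get)
print(best, "->", options[best], "estimated I/Os")   # index on y=2 -> 10
```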
Outline
- Convert SQL query to a parse tree
- Semantic checking: attributes, relation names, types
- Convert to a logical query plan (relational algebra expression); deal with subqueries
- Improve the logical query plan: use algebraic transformations, group together certain operators, evaluate the logical plan based on estimated size of relations
- Convert to a physical query plan: search the space of physical plans, choose order of operations, complete the physical query plan
Three topics:
- Choosing the physical implementations (e.g., select and join methods)
- Decisions regarding materialized vs pipelined
- Notation for physical query plans
Pipelining Versus Materialization
Materialization: store the (intermediate) result of each operation on disk.
Pipelining: interleave the execution of several operations; the tuples produced by one operation are passed directly to the operation that uses them, and (intermediate) results are kept in buffers in main memory.
Prefer pipelining where possible. It is sometimes not possible, as the following example shows. Next few pages: a fully worked-out example
R⋈S⋈U Example (p1) Consider physical query plan for the expression (R(w, x) ⋈ S(x, y)) ⋈ U(y, z) Assumption R occupies 5,000 blocks, S and U each 10,000 blocks. The intermediate result R ⋈ S occupies k blocks for some k. Both joins will be implemented as hash-joins, either one-pass or two-pass depending on k There are 101 buffers available.
R⋈S⋈U Example (p2) When joining R ⋈ S, neither relation fits in buffers Need two-pass hash-join to partition R How many hash buckets for R? 100 at most The 2nd pass hash-join uses 51 buffers, leaving 50 buffers for joining result of R ⋈ S with U. Why 51?
R⋈S⋈U Example (p3)
Case 1: Suppose k ≤ 49, i.e., the result of R ⋈ S occupies at most 49 blocks. Steps:
- Pipeline R ⋈ S into 49 buffers
- Organize them for lookup as a hash table
- Use the one buffer left to read each block of U in turn
- Execute the second join as a one-pass join
The total number of I/O's is 55,000: 45,000 for the two-pass hash join of R and S, plus 10,000 to read U for the one-pass hash join of (R ⋈ S) ⋈ U.
R⋈S⋈U Example (p4)
Case 2: Suppose k > 49 but k < 5,000. We can still pipeline, but need another strategy, where the intermediate result joins with U in a 50-bucket, two-pass hash-join. Steps:
- Before starting on R ⋈ S, hash U into 50 buckets of 200 blocks each.
- Perform the two-pass hash join of R and S using 51 buffers as in case 1, placing the results in the 50 remaining buffers to form 50 buckets for the join of R ⋈ S with U.
- Finally, join R ⋈ S with U bucket by bucket.
The number of disk I/O's is: 20,000 to read U and write its tuples into buckets; 45,000 for the two-pass hash-join of R ⋈ S; k to write out the buckets of R ⋈ S; k + 10,000 to read the buckets of R ⋈ S and U in the final join. The total cost is 75,000 + 2k.
R⋈S⋈U Example (p5)
Case 3: k > 5,000. We cannot perform the two-pass join in the 50 buffers available if the result of R ⋈ S is pipelined, so we are forced to materialize the relation R ⋈ S. The number of disk I/O's is: 45,000 for the two-pass hash-join of R and S; k to store R ⋈ S on disk; 30,000 + 3k for the two-pass join of R ⋈ S with U. The total cost is 75,000 + 4k.
R⋈S⋈U Example (p6) In summary, costs of physical plan as function of R ⋈ S size. Pause and Reflect It’s all about the expected size of the intermediate result R ⋈ S What would have happened if  We guessed 45 but had 55? 		Guessed 55 but only had 45? Guessed 4,500 but had 5,500?	Guessed 5,500 but only had 4,500?
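The three regimes can be summarized in a few lines (a sketch under the assumptions above; plan_cost is a hypothetical helper):

```python
# Hypothetical sketch of the three cost regimes, as a function of k = B(R join S).
def plan_cost(k):
    if k <= 49:          # Case 1: pipeline, second join is one-pass
        return 55_000
    elif k <= 5_000:     # Case 2: pipeline into a 50-bucket two-pass hash-join
        return 75_000 + 2 * k
    else:                # Case 3: must materialize R join S
        return 75_000 + 4 * k

for k in (45, 55, 4_500, 5_500):
    print(k, plan_cost(k))   # shows how a wrong guess for k changes the chosen plan's cost
```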
Outline
- Convert SQL query to a parse tree
- Semantic checking: attributes, relation names, types
- Convert to a logical query plan (relational algebra expression); deal with subqueries
- Improve the logical query plan: use algebraic transformations, group together certain operators, evaluate the logical plan based on estimated size of relations
- Convert to a physical query plan: search the space of physical plans, choose order of operations, complete the physical query plan
Three topics:
- Choosing the physical implementations (e.g., select and join methods)
- Decisions regarding materialized vs pipelined
- Notation for physical query plans
Notation for Physical Query Plans Several types of operators:  Operators for leaves (Physical) operators for Selection (Physical) Sorts Operators Other Relational-Algebra Operations In practice, each DBMS uses its own internal notation for physical query plans
PQP Notation
Leaves: replace a leaf in an LQP by
- TableScan(R): read all blocks
- SortScan(R, L): read in order according to L
- IndexScan(R, C): scan R using an index on attribute A, for a condition C of the form A θ c
- IndexScan(R, A): scan R using an index on attribute A
Selects: replace a Select in an LQP by one of the leaf operators plus Filter(D) for condition D
Sorts: replace a leaf-level sort as shown above; for other operations, Sort(L) sorts a relation that is not stored
Other operators: operation- and algorithm-specific (e.g., Hash-Join); also need to specify # passes, buffer sizes, etc.
We have Arrived at the Desired Endpoint: Example Physical Query Plans
For σ x=1 AND y=2 AND z<5 (R): IndexScan(R, y=2), then Filter(x=1 AND z<5)
For R ⋈ S ⋈ U: two-pass hash-join (101 buffers) of TableScan(R) and TableScan(S), materialize the result, then two-pass hash-join (101 buffers) with TableScan(U)
Outline
- Convert SQL query to a parse tree
- Semantic checking: attributes, relation names, types
- Convert to a logical query plan (relational algebra expression); deal with subqueries
- Improve the logical query plan: use algebraic transformations, group together certain operators, evaluate the logical plan based on estimated size of relations
- Convert to a physical query plan: search the space of physical plans, choose order of operations, complete the physical query plan
Optimization Issues and Proposals
- The “fuzz” in estimation of sizes: Parametric Query Optimization specifies alternatives to the execution engine so it may respond to conditions at runtime
- Multiple-query optimization: take concurrent execution of several queries into account
- Combinatoric explosion of options when doing an n-way Join: becomes really expensive around n > 15. Alternative optimizations (e.g., rule-based optimizers, randomized plan generation) have been proposed for special situations, but there is no general framework
CS 542 Database Management Systems Distributed Query Execution Source: Carsten Binnig, Univ of Zurich, 2006 J Singh  March 28, 2011
Motivation
Algorithms based on Semi-Joins have been proposed as techniques for query optimization. They shine in Distributed and Parallel Databases, so this is a good opportunity to explore them in that context.
Semi-join by example: (example shown as a figure on the slide)
Semi-join formal definition: R ⋉ S = π attributes(R) (R ⋈ S), i.e., the tuples of R that join with at least one tuple of S
Distributed / Parallel Join Processing
Scenario: how to compute A ⋈ B when Table A resides on Node 1 and Table B resides on Node 2?
Naïve approach (1)
Idea: use a standard join and fetch the table page-wise from the remote node if necessary (send- and receive-operators).
Example: the join is executed on node 2 using a Nested-Loop-Join. Outer loop: request a page of table A from node 1 (remote). Inner loop: for each page, iterate over table B and produce output.
=> Random access of pages on node 1 (costly due to network delay)
Naïve approach (2)
Idea: ship one table completely to the other node.
Example: ship complete table A from node 1 to node 2, then join tables A and B locally on node 2.
Naïve Approach: Implications
Problems: high cost for shipping data. Network cost is roughly the same as I/O cost for a hard disk (or even worse, because of the unpredictability of network delay); shipping A is roughly equivalent to a full table scan.
(Trivial) Optimizations: always ship the smaller table to the other side; if the query contains a selection, apply the selection before sending A. Note: the bigger table may become the smaller table (after selection).
Semi-join Approach (p1)
Idea: before shipping a table, reduce the data that is shipped to only those tuples that are relevant for the join.
Example: join on A.id=B.id, and table A should be shipped to node 2.
Semi-join Approach (p2)
(1) Compute the projection B.id of table B on node 2
(2) Ship column B.id to node 1
Semi-join Approach (p3)
(3) Execute the semi-join of B.id and table A on A.id=B.id (to select only the relevant tuples of table A => table A’)
(4) Send the result of the semi-join (table A’) to node 2
Semi-join Approach (p4)
(5) Join the shipped table A’ locally on node 2 with table B
=> Optimization of this approach: if node 1 holds a join index (e.g., type 1 with A.id -> {B.RID}) we can start with step (3)
Semi-join Approach Discussion
This strategy works well if the semi-join reduces the size of the table that needs to be shipped. Assume all rows of Table A are needed anyway => none of the rows of table A can be discarded; then this approach is more costly than shipping the entire table A in the first place! Consequence: need to decide whether this method makes sense based on semi-join selectivity => cost-based optimization must decide this
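A minimal sketch of the semi-join shipping strategy in steps (1)-(5) above, with plain Python lists standing in for the tables on the two nodes (all names and values are hypothetical):

```python
# Hypothetical sketch of the semi-join distributed join on A.id = B.id.
table_A = [{"id": 1, "x": "a"}, {"id": 2, "x": "b"}, {"id": 3, "x": "c"}]   # lives on node 1
table_B = [{"id": 2, "y": "u"}, {"id": 3, "y": "v"}, {"id": 4, "y": "w"}]   # lives on node 2

# (1)+(2) node 2 projects B.id and ships the id set to node 1
b_ids = {t["id"] for t in table_B}

# (3)+(4) node 1 computes the semi-join A ⋉ B and ships only the reduced table A'
table_A_reduced = [t for t in table_A if t["id"] in b_ids]

# (5) node 2 joins the shipped A' with B locally
result = [{**a, **b} for a in table_A_reduced for b in table_B if a["id"] == b["id"]]
print(result)   # only tuples of A with a join partner were shipped
```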
Bloom-join Approach (p1)
The algorithm is the same as the semi-join approach, but ship a bloom-filter instead of the (foreign) key column; use the bloom-filter technique to compress the data. Goal: only send a small bit list (bit-vector) to reduce network I/O, instead of all keys of the column. Problems: a superset of the tuples that might join will be sent back (the same problem as in bloom-filters for bitmap indexes) => more tuples must be sent over the network, so the net gain depends on a good hash function
Bloom-join Approach (p2)
(1) Compute a bloom filter BL of size n for column B.id of table B on node 2, with n << |B.id| (e.g., by B.id % n)
(2) Ship the bloom filter B.id’ to node 1
Bloom-join Approach (p3)
(3) Probe the bloom filter B.id’ with tuples from table A to get a superset of possible join candidates (=> table A’)
(4) Send the result (table A’) to node 2 (table A’ might contain join candidates that do not have a partner in table B)
(5) Join the shipped table A’ locally on node 2 with table B
Bloom-join Approach Discussion
Communication cost is much reduced, but we have to deal with false positives. Widely used in NoSQL databases
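A minimal sketch of the Bloom-join idea (hypothetical; a real system would use a proper Bloom filter with multiple hash functions, while this uses the slide's simple B.id % n example):

```python
# Hypothetical sketch of a Bloom-join: ship an n-bit filter built from B.id instead of the ids.
n = 8
table_A = [{"id": i} for i in range(1, 20)]                 # lives on node 1
table_B = [{"id": i, "y": i * i} for i in (2, 3, 11)]       # lives on node 2

# (1)+(2) node 2 builds the bit list from B.id and ships it to node 1
bloom = [False] * n
for t in table_B:
    bloom[t["id"] % n] = True

# (3)+(4) node 1 probes the filter; A' is a superset of the true join candidates
table_A_candidates = [t for t in table_A if bloom[t["id"] % n]]

# (5) node 2 joins A' with B locally; false positives drop out here
result = [(a["id"], b["y"]) for a in table_A_candidates for b in table_B if a["id"] == b["id"]]
print(len(table_A_candidates), "candidates shipped,", len(result), "real matches")
```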