Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/1
Outline
• Introduction
• Background
• Distributed Database Design
• Database Integration
• Semantic Data Control
• Distributed Query Processing
• Distributed Transaction Management
• Data Replication
➡ Consistency criteria
➡ Replication protocols
➡ Replication and failure management
• Parallel Database Systems
• Distributed Object DBMS
• Peer-to-Peer Data Management
• Web Data Management
• Current Issues
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/2
Replication
• Why replicate?
➡ System availability
✦ Avoid single points of failure
➡ Performance
✦ Localization
➡ Scalability
✦ Scalability in numbers and geographic area
➡ Application requirements
• Why not replicate?
➡ Replication transparency
➡ Consistency issues
✦ Updates are costly
✦ Availability may suffer if not careful
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/3
Execution Model
• There are physical copies of logical objects in the system.
• Operations are specified on logical objects, but translated to operate on
physical objects.
• One-copy equivalence
➡ The effect of transactions performed by clients on replicated objects should be
the same as if they had been performed on a single set of objects.
[Figure: a logical data item x with physical copies x1, x2, …, xn; a Write(x) on the logical item is translated into Write(x1), Write(x2), …, Write(xn) on the physical copies.]
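As a concrete (hypothetical) illustration of this translation, a minimal Python sketch that expands a logical write into a write on every physical copy and maps a logical read onto one copy:

```python
# Minimal sketch of one-copy translation (hypothetical names, not from the book):
# a logical item x is backed by physical copies, a logical Write(x) becomes a
# write on every copy (write-all), and a logical Read(x) can go to any one copy.

replicas = {"x": ["x1", "x2", "x3"]}   # logical data item -> its physical copies

def translate_write(item, value):
    """Expand a logical Write(item, value) into writes on all physical copies."""
    return [("Write", copy, value) for copy in replicas[item]]

def translate_read(item):
    """Map a logical Read(item) onto a single physical copy (read-one)."""
    return ("Read", replicas[item][0])

print(translate_write("x", 20))   # [('Write', 'x1', 20), ('Write', 'x2', 20), ('Write', 'x3', 20)]
print(translate_read("x"))        # ('Read', 'x1')
```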
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/4
Replication Issues
• Consistency models - how do we reason about the consistency of the
“global execution state”?
➡ Mutual consistency
➡ Transactional consistency
• Where are updates allowed?
➡ Centralized
➡ Distributed
• Update propagation techniques – how do we propagate updates to one
copy to the other copies?
➡ Eager
➡ Lazy
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/5
Consistency
• Mutual Consistency
➡ How do we keep the values of physical copies of a logical data item
synchronized?
➡ Strong consistency
✦ All copies are updated within the context of the update transaction
✦ When the update transaction completes, all copies have the same value
✦ Typically achieved through 2PC
➡ Weak consistency
✦ Eventual consistency: the copies are not identical when update transaction
completes, but they eventually converge to the same value
✦ Many versions possible:
✓ Time-bounds
✓ Value-bounds
✓ Drifts
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/6
Transactional Consistency
• How can we guarantee that the global execution history over replicated
data is serializable?
• One-copy serializability (1SR)
➡ The effect of transactions performed by clients on replicated objects should be the same as if they had been performed one at a time on a single set of objects.
• Weaker forms are possible
➡ Snapshot isolation
➡ RC-serializability
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/7
Example 1
Site A holds x; Site B holds x, y; Site C holds x, y, z
T1: x ← 20; Write(x); Commit
T2: Read(x); x ← x+y; Write(y); Commit
T3: Read(x); Read(y); z ← (x∗y)/100; Write(z); Commit
Consider the three histories:
HA={W1(xA), C1}
HB={W1(xB), C1, R2(xB), W2(yB), C2}
HC={W2(yC), C2, R3(xC), R3(yC),W3(zC), C3, W1(xC),C1}
Global history non-serializable: HB: T1→T2, HC: T2→T3→T1
Mutually consistent: Assume xA=xB=xC=10, yB=yC=15, zC=7 to begin; in the
end xA=xB=xC=20, yB=yC=35, zC=3.5
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/8
Example 2
Site A holds x; Site B holds x
T1 (at Site A): Read(x); x ← x+5; Write(x); Commit
T2 (at Site B): Read(x); x ← x∗10; Write(x); Commit
Consider the two histories:
HA={R1(xA),W1(xA), C1, W2(xA), C2}
HB={R2(xB), W2(xB), C2, W1(xB), C1}
Global history non-serializable: HA: T1→ T2, HB: T2→ T1
Mutually inconsistent: Assume xA=xB=1 to begin; in the end xA=10,
xB=6
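The final values claimed above can be checked by replaying the two local histories; the sketch below is illustrative only and assumes each transaction computes its write value from the copy it read (both copies start at 1):

```python
def replay(order, x0=1):
    """Replay one site's history; a remote write carries the value computed at the remote site."""
    x = x0
    pending = {}                               # write value computed locally by each transaction
    for op in order:
        if op == "R1":                         # T1 reads here and computes x + 5
            pending["W1"] = x + 5
        elif op == "R2":                       # T2 reads here and computes x * 10
            pending["W2"] = x * 10
        elif op in ("W1", "W2"):
            # a transaction that read at the other site computed its value from
            # that site's initial copy, which is also x0 in this example
            x = pending.get(op, {"W1": x0 + 5, "W2": x0 * 10}[op])
    return x

HA = ["R1", "W1", "W2"]   # HA = {R1(xA), W1(xA), C1, W2(xA), C2}
HB = ["R2", "W2", "W1"]   # HB = {R2(xB), W2(xB), C2, W1(xB), C1}
print(replay(HA), replay(HB))   # 10 6 -> the two copies diverge (mutually inconsistent)
```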
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/9
Update Management Strategies
• Depending on when the updates are propagated
➡ Eager
➡ Lazy
• Depending on where the updates can take place
➡ Centralized
➡ Distributed
[Figure: 2×2 matrix combining the two dimensions: Eager/Lazy (when) × Centralized/Distributed (where).]
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/10
Eager Replication
• Changes are propagated within the scope of the transaction making the
changes. The ACID properties apply to all copy updates.
➡ Synchronous
➡ Deferred
• ROWA protocol: Read-one/Write-all
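A minimal sketch of the read-one/write-all idea just stated, using hypothetical in-memory sites and omitting locking and 2PC: every write is applied to all copies within the transaction, so all copies agree by commit time.

```python
sites = {"Site1": {}, "Site2": {}, "Site3": {}, "Site4": {}}

class EagerTxn:
    """Deferred-write variant for brevity: writes are buffered, then applied to all copies."""

    def __init__(self):
        self.writes = []

    def read(self, item):
        return sites["Site1"].get(item)     # read one copy (any site would do)

    def write(self, item, value):
        self.writes.append((item, value))   # buffer the write

    def commit(self):
        for item, value in self.writes:     # write-all: propagate to every copy
            for db in sites.values():       # (2PC would make this step atomic)
                db[item] = value

t = EagerTxn()
t.write("x", 20)
t.commit()
print({name: db["x"] for name, db in sites.items()})   # every copy holds 20
```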
[Figure: the transaction applies its updates at Sites 1–4 and then commits; all copies are updated within the transaction boundary.]
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/11
Lazy Replication
• Lazy replication first executes the updating transaction on one copy. After the transaction commits, the changes are propagated to all other copies (refresh transactions).
• While the propagation takes place, the copies are mutually inconsistent.
• The time the copies are mutually inconsistent is an adjustable parameter which is application dependent.
[Figure: the transaction updates Site 1 and commits; refresh transactions later propagate the changes to Sites 2–4.]
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/12
Centralized
• There is only one copy that can be updated (the master); all other (slave) copies are updated to reflect the changes to the master.
[Figure: updates are applied at the master copy and propagated to the slave copies at the other sites.]
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/13
Distributed
• Changes can be initiated at any of the copies; that is, any of the sites that own a copy can update the value of the data item.
[Figure: transactions may originate at different sites; each updates its local copy and commits there.]
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/14
Forms of Replication
Eager
+ No inconsistencies (identical copies)
+ Reading the local copy yields the most up-to-date value
+ Changes are atomic
− A transaction has to update all sites
− Longer execution time
− Lower availability

Lazy
+ A transaction is always local (good response time)
− Data inconsistencies
− A local read does not always return the most up-to-date value
− Changes to all copies are not guaranteed
− Replication is not transparent

Centralized
+ No inter-site synchronization is necessary (it takes place at the master)
+ There is always one site which has all the updates
− The load at the master can be high
− Reading the local copy may not yield the most up-to-date value

Distributed
+ Any site can run a transaction
+ Load is evenly distributed
− Copies need to be synchronized
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/15
Replication Protocols
The previous ideas can be combined into 4 different replication protocols:
➡ Eager centralized
➡ Eager distributed
➡ Lazy centralized
➡ Lazy distributed
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/16
Eager Centralized Protocols
• Design parameters:
➡ Distribution of master
✦ Single master: one master for all data items
✦ Primary copy: different masters for different (sets of) data items
➡ Level of transparency
✦ Limited: applications and users need to know who the master is
✓ Update transactions are submitted directly to the master
✓ Reads can occur on slaves
✦ Full: applications and users can submit anywhere and the operations will be
forwarded to the master
✓ Operation-based forwarding
• Four alternative implementation architectures, only three are meaningful:
➡ Single master, limited transparency
➡ Single master, full transparency
➡ Primary copy, full transparency
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/17
Eager Single Master/Limited
Transparency
• Applications submit update transactions directly to the master
• Master:
➡ Upon read: read locally and return to user
➡ Upon write: write locally, multicast the write to the other replicas (in FIFO or timestamp order)
➡ Upon commit request: run 2PC coordinator to ensure that all have really
installed the changes
➡ Upon abort: abort and inform other sites about abort
• Slaves install writes that arrive from the master
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/18
Eager Single Master/Limited
Transparency (cont’d)
• Applications submit read transactions directly to an appropriate slave
• Slave
➡ Upon read: read locally
➡ Upon write from master copy: execute conflicting writes in the proper order
(FIFO or timestamp)
➡ Upon write from client: refuse (abort the transaction; it is an error)
➡ Upon commit request from read-only: commit locally
➡ Participant of 2PC for update transaction running on primary
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/19
Eager Single Master/
Full Transparency
Applications submit all transactions to the Transaction Manager at their own site (the Coordinating TM).
Coordinating TM
1. Send op(x) to the master site
2. Send Read(x) to any site that has x
3. Send Write(x) to all the slaves where a copy of x exists
4. When Commit arrives, act as coordinator for 2PC
Master Site
1. If op(x) = Read(x): set a read lock on x and send a “lock granted” message to the coordinating TM
2. If op(x) = Write(x):
   a) Set a write lock on x
   b) Update the local copy of x
   c) Inform the coordinating TM
3. Act as participant in 2PC
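The interplay above can be sketched roughly as follows; the classes and method names are hypothetical, lock waiting and the actual 2PC rounds are omitted, so this is an outline of the message flow rather than a faithful implementation.

```python
class Master:
    def __init__(self):
        self.data, self.locks = {}, set()

    def handle(self, op, item, value=None):
        self.locks.add(item)                   # grant the read/write lock (no waiting modeled)
        if op == "write":
            self.data[item] = value            # update the master copy
        return "lock granted"

class CoordinatingTM:
    def __init__(self, master, slaves):
        self.master, self.slaves = master, slaves

    def read(self, item):
        self.master.handle("read", item)       # 1. forward op(x); master grants the read lock
        return self.slaves[0].get(item)        # 2. read any site that has a copy of x

    def write(self, item, value):
        self.master.handle("write", item, value)   # 1-2. master locks and updates its copy
        for s in self.slaves:                      # 3. send Write(x) to all slave copies
            s[item] = value

    def commit(self):
        pass                                       # 4. act as 2PC coordinator (omitted)

slaves = [{}, {}]
tm = CoordinatingTM(Master(), slaves)
tm.write("x", 42)
tm.commit()
print(tm.read("x"), slaves)    # 42 [{'x': 42}, {'x': 42}]
```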
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/20
Eager Primary Copy/Full
Transparency
• Applications submit transactions directly to their local TMs
• Local TM:
➡ Forward each operation to the primary copy of the data item
➡ Upon granting of locks, submit Read to any slave, Write to all slaves
➡ Coordinate 2PC
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/21
Eager Primary Copy/Full
Transparency (cont’d)
• Primary copy site
➡ Read(x): lock x and reply to TM
➡ Write(x): lock x, perform update, inform TM
➡ Participate in 2PC
• Slaves: as before
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/22
Eager Distributed Protocol
• Updates originate at any copy
➡ Each site uses two-phase locking.
➡ Read operations are performed locally.
➡ Write operations are performed at all sites (using a distributed locking
protocol).
➡ Coordinate 2PC
• Slaves:
➡ As before
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/23
Eager Distributed Protocol
(cont’d)
• Critical issue:
➡ Concurrent Writes initiated at different master sites must be executed in the same order at each slave site
➡ Local histories are serializable (this is easy)
• Advantages
➡ Simple and easy to implement
• Disadvantage
➡ Very high communication overhead
✦ n replicas; m update operations in each transaction: n*m messages (assuming no multicasting)
✦ For a throughput of k tps: k*n*m messages
• Alternative
➡ Use group communication + deferred update to slaves to reduce messages
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/24
Lazy Single Master/Limited
Transparency
• Update transactions submitted to master
• Master:
➡ Upon read: read locally and return to user
➡ Upon write: write locally and return to user
➡ Upon commit/abort: terminate locally
➡ Sometime after commit: multicast updates to slaves (in order)
• Slaves:
➡ Upon read: read locally
➡ Refresh transactions: install updates
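A small sketch of this master/slave division of work, with a hypothetical in-memory master, one slave, and a refresh queue standing in for the multicast; it shows the window of mutual inconsistency between commit and refresh delivery.

```python
from collections import deque

master, slave = {"x": 10}, {"x": 10}
refresh_queue = deque()                    # ordered refresh transactions awaiting delivery

def master_write_and_commit(item, value):
    master[item] = value                   # write locally and return to the user
    refresh_queue.append((item, value))    # propagation happens only after commit

def deliver_refresh():
    while refresh_queue:                   # the slave installs refresh transactions in order
        item, value = refresh_queue.popleft()
        slave[item] = value

master_write_and_commit("x", 20)
print(master["x"], slave["x"])   # 20 10 -> copies temporarily disagree
deliver_refresh()
print(master["x"], slave["x"])   # 20 20 -> they converge once the refresh is applied
```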
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/25
Lazy Primary Copy/Limited
Transparency
• There are multiple masters; each master execution is similar to lazy single
master in the way it handles transactions
• Slave execution is more complicated: refresh transactions arrive from multiple masters and need to be ordered properly
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/26
Lazy Primary Copy/Limited
Transparency – Slaves
• Assign system-wide unique timestamps to refresh transactions and execute
them in timestamp order
➡ May cause too many aborts
• Replication graph
➡ Similar to a serialization graph, but the nodes are transactions (T) + sites (S); an edge 〈Ti, Sj〉 exists iff Ti performs a Write(x) and x is stored in Sj
➡ For each operation (opk), enter the appropriate nodes (Tk) and edges; if the graph has no cycles, there is no problem
➡ If a cycle exists and the transactions in the cycle have been committed at their masters, but their refresh transactions have not yet committed at the slaves, abort Tk; if they have not yet committed at their masters, Tk waits
• Use group communication
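A rough sketch of the replication-graph test described above, using a generic undirected cycle check (union-find) over hypothetical transaction and site nodes; the abort/wait decision that follows a detected cycle is omitted.

```python
def has_cycle(edges):
    """Union-find cycle check on an undirected graph given as (u, v) pairs."""
    parent = {}

    def find(n):
        parent.setdefault(n, n)
        while parent[n] != n:
            parent[n] = parent[parent[n]]   # path halving
            n = parent[n]
        return n

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return True        # this edge closes a cycle
        parent[ru] = rv
    return False

# T1 and T2 both write items replicated at sites S1 and S2 -> cycle -> potential conflict
print(has_cycle([("T1", "S1"), ("T1", "S2"), ("T2", "S1"), ("T2", "S2")]))  # True
# T1 writes only at S1, T2 only at S2 -> no cycle -> the refreshes cannot conflict
print(has_cycle([("T1", "S1"), ("T2", "S2")]))                              # False
```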
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/27
Lazy Single Master/Full
Transparency
• This is very tricky
➡ Forwarding operations to a master and then receiving refresh transactions causes difficulties
• Two problems:
➡ Violation of 1SR behavior
➡ A transaction may not see its own writes
• Problem arises in primary copy/full transparency as well
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/28
Example 3
Site M (Master) holds x, y; Site B holds slave copies of x, y
T1: Read(x), Write(y), Commit
T2: Read(x), Write(y), Commit
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/29
Example 4
• Master site M holds x, site C holds slave copy of x
• T3: Write(x), Read(x), Commit
• Sequence of execution
1. W3(x) submitted at C, forwarded to M for execution
2. W3(x) is executed at M, confirmation sent back to C
3. R3(x) submitted at C and executed on the local copy
4. T3 submits Commit at C, forwarded to M for execution
5. M executes Commit, sends notification to C, which also commits T3
6. M sends refresh transaction for T3 to C (for W3(x) operation)
7. C executes the refresh transaction and commits it
• When C reads x at step 3, it does not see the effects of Write at step 2
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/30
Lazy Single Master/
Full Transparency - Solution
• Assume T = Write(x)
• At commit time of transaction T, the master generates a timestamp for it
[ts(T)]
• Master sets last_modified(xM) ← ts(T)
• When a refresh transaction arrives at a slave site i, it also sets
last_modified(xi) ← last_modified(xM)
• Timestamp generation rule at the master:
➡ ts(T) should be greater than all previously issued timestamps and should
be less than the last_modified timestamps of the data items it has accessed. If
such a timestamp cannot be generated, then T is aborted.
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/31
Lazy Distributed Replication
• Any site:
➡ Upon read: read locally and return to user
➡ Upon write: write locally and return to user
➡ Upon commit/abort: terminate locally
➡ Sometime after commit: send refresh transaction
➡ Upon message from other site
✦ Detect conflicts
✦ Install changes
✦ Reconciliation may be necessary
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/32
Reconciliation
• Conflicts detected during lazy distributed replication can be resolved using pre-arranged patterns:
➡ Latest update wins (newer updates preferred over old ones)
➡ Site priority (preference to updates from headquarters)
➡ Largest value (the update with the larger value is preferred)
• Or using ad-hoc decision making procedures:
➡ Identify the changes and try to combine them
➡ Analyze the transactions and eliminate the non-important ones
➡ Implement your own priority schemes
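Two of the pre-arranged patterns can be sketched as simple tie-breaking functions over hypothetical update records (the timestamps and site names below are made up for illustration):

```python
SITE_PRIORITY = {"HQ": 2, "Branch": 1}   # e.g. headquarters updates beat branch updates

def latest_update_wins(u1, u2):
    """Keep the update with the newer timestamp."""
    return u1 if u1["ts"] >= u2["ts"] else u2

def site_priority(u1, u2):
    """Keep the update from the higher-priority site."""
    return u1 if SITE_PRIORITY[u1["site"]] >= SITE_PRIORITY[u2["site"]] else u2

a = {"item": "x", "value": 10, "ts": 100, "site": "Branch"}
b = {"item": "x", "value": 30, "ts": 95,  "site": "HQ"}

print(latest_update_wins(a, b)["value"])   # 10 -> the newer update wins
print(site_priority(a, b)["value"])        # 30 -> the headquarters update wins
```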
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/33
Replication Strategies
Eager Centralized
+ Updates do not need to be coordinated
+ No inconsistencies
− Longest response time
− Only useful with few updates
− Local copies can only be read

Eager Distributed
+ No inconsistencies
+ Elegant (symmetrical solution)
− Long response times
− Updates need to be coordinated

Lazy Centralized
+ No coordination necessary
+ Short response times
− Local copies are not up to date
− Inconsistencies

Lazy Distributed
+ No centralized coordination
+ Shortest response times
− Inconsistencies
− Updates can be lost (reconciliation)
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/34
Group Communication
• A node can multicast a message to all nodes of a group with a delivery
guarantee
• Multicast primitives
➡ There are a number of them
➡ Total ordered multicast: all messages sent by different nodes are delivered in
the same total order at all the nodes
• Used with deferred writes, can reduce communication overhead
➡ Remember that eager distributed requires k*m messages (with multicast) for a throughput of k tps when there are n replicas and m update operations in each transaction
➡ With group communication and deferred writes: 2k messages
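The message counts quoted above can be compared with a quick back-of-the-envelope calculation (the replica, operation, and throughput numbers below are arbitrary):

```python
# Back-of-the-envelope comparison of the message counts quoted in the slides.

n, m, k = 5, 4, 100   # n replicas, m update operations per transaction, k tps

eager_no_multicast   = k * n * m   # eager distributed, point-to-point messages
eager_with_multicast = k * m       # eager distributed, one multicast per update operation
group_comm_deferred  = 2 * k       # group communication + deferred writes: 2k messages

print(eager_no_multicast, eager_with_multicast, group_comm_deferred)   # 2000 400 200
```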
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/35
Failures
• So far we have considered replication protocols in the absence of failures
• How to keep replica consistency when failures occur
➡ Site failures
✦ Read One Write All Available (ROWAA)
➡ Communication failures
✦ Quorums
➡ Network partitioning
✦ Quorums
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/36
ROWAA with Primary Site
• READ = read any copy; if it times out, read another copy.
• WRITE = send W(x) to all copies. If one site rejects the operation, then
abort. Otherwise, all sites not responding are “missing writes”.
• VALIDATION = To commit a transaction
➡ Check that all sites in “missing writes” are still down. If not, then abort the
transaction.
✦ A site might be recovering concurrently with the transaction's updates, and those updates may be lost
➡ Check that all sites that were available are still available. If some do not
respond, then abort.
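A compact sketch of the write and validation steps, with a hypothetical availability map standing in for failure detection; real recovery and 2PC are omitted:

```python
sites_up = {"S1": True, "S2": True, "S3": False}    # S3 is down when T writes

def rowaa_write(item, value, data):
    """Write all available copies; record non-responding sites as 'missing writes'."""
    missing = []
    for site, up in sites_up.items():
        if up:
            data[site][item] = value
        else:
            missing.append(site)
    return missing

def validate(missing, written):
    """Commit only if the missing-writes sites are still down and the written sites still up."""
    return all(not sites_up[s] for s in missing) and all(sites_up[s] for s in written)

data = {s: {} for s in sites_up}
missing = rowaa_write("x", 20, data)
written = [s for s in sites_up if s not in missing]
print("commit" if validate(missing, written) else "abort")   # commit

sites_up["S3"] = True          # S3 recovers before T commits -> its copy missed the write
print("commit" if validate(missing, written) else "abort")   # abort
```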
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/37
Distributed ROWAA
• Each site has a copy of V
➡ V represents the set of sites that a site believes to be available
➡ V(A) is the “view” a site has of the system configuration.
• The view of a transaction T [V(T)] is the view of its coordinating site, when the
transaction starts.
➡ Read any copy within V; update all copies in V
➡ If at the end of the transaction the view has changed, the transaction is aborted
• All sites must have the same view!
• To modify V, run a special atomic transaction at all sites.
➡ Take care that there are no concurrent views!
➡ Similar to commit protocol.
➡ Idea: Vs have version numbers; only accept new view if its version number is higher
than your current one
• Recovery: get missed updates from any active node
➡ Problem: no unique sequence of transactions
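The view-versioning idea can be sketched as follows (hypothetical structures; the atomic view-change transaction itself is not shown):

```python
current_view = {"version": 3, "sites": {"S1", "S2", "S3"}}

def accept_view(new_view):
    """Install a proposed view only if its version number is higher than the current one."""
    global current_view
    if new_view["version"] > current_view["version"]:
        current_view = new_view
        return True
    return False                                    # stale view proposal, reject it

def finish_transaction(view_version_at_start):
    """Reads/updates within V happened earlier (omitted); abort if the view has changed."""
    return "commit" if view_version_at_start == current_view["version"] else "abort"

v_T = current_view["version"]                       # V(T): the coordinator's view when T starts
accept_view({"version": 4, "sites": {"S1", "S2"}})  # a site fails; a newer view is installed
print(finish_transaction(v_T))                      # abort -> the view changed during T
```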
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/38
Quorum-Based Protocol
• Assign a vote to each copy of a replicated object (say Vi) such that ∑i Vi = V
• Each operation has to obtain a read quorum (Vr) to read and a write quorum (Vw) to write an object
• Then the following rules have to be obeyed in determining the quorums:
➡ Vr + Vw > V (an object is not read and written by two transactions concurrently)
➡ Vw > V/2 (two write operations from two transactions cannot occur concurrently on the same object)
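A quick check of the two quorum rules on some hypothetical vote assignments (the three cases loosely mirror the correct choice, the conflict-prone choice, and ROWA on the next slide):

```python
def quorums_ok(votes, Vr, Vw):
    """Check Vr + Vw > V and Vw > V/2 for a given assignment of votes to copies."""
    V = sum(votes)
    return Vr + Vw > V and Vw > V / 2

votes = [1, 1, 1, 1]                    # four copies with one vote each, so V = 4

print(quorums_ok(votes, Vr=2, Vw=3))    # True  -> a correct choice of quorums
print(quorums_ok(votes, Vr=1, Vw=2))    # False -> Vw <= V/2, write-write conflicts possible
print(quorums_ok(votes, Vr=1, Vw=4))    # True  -> read-one/write-all (ROWA) as a special case
```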
Distributed DBMS ©M. T. Özsu & P. Valduriez Ch.13/39
Quorum Example
Three examples of the voting algorithm:
a) A correct choice of read and write set
b) A choice that may lead to write-write conflicts
c) ROWA
From Tanenbaum and van Steen, Distributed Systems: Principles and Paradigms
© Prentice-Hall, Inc. 2002