2. www.percona.com
Who are we?
Frédéric Descamps
@lefred
Percona Consultant
http://about.me/lefred
devops believer
Managing MySQL since 3.23 (as far as I remember)
Seppo Jaakola
@codership
Founder of Codership
8. The Plan
● Configure PXC on node2 and node3
● Take a backup of node1
● Restore the backup on node2
● Play with the 2 nodes cluster
● Set up the current production server as the 3rd node
9. Connect to your servers
● Test the connection (ssh) to all your servers (node1, node2 and node3)
login: root
password: vagrant
ssh -p 2221 root@127.0.0.1 (node1)
ssh -p 2222 root@127.0.0.1 (node2)
ssh -p 2223 root@127.0.0.1 (node3)
11. The production
● We have a script that simulates our production load
while true
do
    pluk.py
    sleep 5
done
Run the script (pluk.py) once on node1
12. Install PXC
● On node2 and node3, install Percona-XtraDB-Cluster-Server
● You should use yum (or apt)
● We will use rpm as the files are already downloaded in /usr/local/rpms
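A sketch of that install step, assuming the RPMs sit in /usr/local/rpms as stated above (the package file names are placeholders and vary by PXC version):

```shell
# on node2 and node3, as root; file names below are placeholders
cd /usr/local/rpms
rpm -ivh Percona-XtraDB-Cluster-server-*.rpm \
         Percona-XtraDB-Cluster-client-*.rpm \
         Percona-XtraDB-Cluster-galera-*.rpm
```

With a configured repository, `yum install Percona-XtraDB-Cluster-server` does the same with dependency resolution.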
15. To remember
● Disable SELinux
● wsrep_cluster_address now supports multiple entries; wsrep_urls in [mysqld_safe] is deprecated
● The SST method is defined in my.cnf
● When wsrep_node_address is set, you can omit wsrep_sst_receive_address, wsrep_node_incoming_address and ist.recv_addr
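The points above map to a my.cnf fragment like this one (a minimal sketch; the library path, node IPs and cluster name are placeholders, not values from the slides):

```ini
[mysqld]
# placeholders: adjust library path and node IPs to your environment
wsrep_provider = /usr/lib64/libgalera_smm.so
wsrep_cluster_address = gcomm://192.168.70.2,192.168.70.3,192.168.70.4
wsrep_node_address = 192.168.70.3
wsrep_cluster_name = pxc_lab
wsrep_sst_method = xtrabackup
```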
16. Let's have a look...
● Check the MySQL error log: what do we see?
● Check variables and status related to PXC
○ SHOW GLOBAL VARIABLES LIKE 'wsrep%';
○ SHOW GLOBAL STATUS LIKE 'wsrep%';
● Play with the cluster (follow instructor)
17. To remember
● wsrep = 'Write Set Replicator'
● Settings are available with SHOW GLOBAL VARIABLES LIKE 'wsrep%';
● Status counters are available with SHOW GLOBAL STATUS LIKE 'wsrep%';
● Important to check cluster status:
○ wsrep_local_state_comment
○ wsrep_cluster_size
○ wsrep_cluster_status
○ wsrep_connected
○ wsrep_ready
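For example, on a healthy node of a 3-node cluster these counters typically look as follows (the values in the comments are the usual healthy readings, shown here for orientation):

```sql
-- run on any node; a healthy node usually shows Synced / Primary / ON
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';  -- Synced
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';         -- 3 on a 3-node cluster
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';       -- Primary
SHOW GLOBAL STATUS LIKE 'wsrep_connected';            -- ON
SHOW GLOBAL STATUS LIKE 'wsrep_ready';                -- ON
```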
18. What about State Snapshot Transfer (SST)?
● SST = full copy of the cluster data to a specific node (from DONOR to JOINER)
● wsrep_sst_donor
● Multiple SST methods:
○ skip
○ rsync
○ mysqldump
○ xtrabackup
19. What about State Snapshot Transfer (SST)? (cont.)
Exercise: test all SST methods
20. What about State Snapshot Transfer (SST)? (cont.)
No problem with mysqldump?
21. To remember
● SST methods are not all the same
● You can specify a donor per node
● Xtrabackup doesn't freeze the donor for the complete SST period
● Xtrabackup requires authentication parameters
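For the xtrabackup method, those authentication parameters go into my.cnf through wsrep_sst_auth; the user name and password below are placeholders for an account on the donor that xtrabackup can use:

```ini
[mysqld]
wsrep_sst_method = xtrabackup
# placeholder credentials: create this MySQL account on the donor first
wsrep_sst_auth = sstuser:s3cretPass
```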
25. Quorum and split brain
● PXC checks for quorum to avoid split-brain situations
Exercise: stop the communication between node2 and node3
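One common way to break the node2↔node3 link for this exercise is iptables (a sketch, assuming node3's address is the placeholder 192.168.70.4; adjust to your lab):

```shell
# on node2, as root: drop all traffic to and from node3 (placeholder IP)
iptables -A INPUT  -s 192.168.70.4 -j DROP
iptables -A OUTPUT -d 192.168.70.4 -j DROP

# restore communication afterwards by deleting the same rules
iptables -D INPUT  -s 192.168.70.4 -j DROP
iptables -D OUTPUT -d 192.168.70.4 -j DROP
```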
26. Quorum and split brain
● BAD solution :-(
wsrep_provider_options = "pc.ignore_quorum = true"
● and the GOOD solution... next slide!
28. Quorum and split brain (2)
● Galera Arbitration Daemon (garbd)
Run garbd on node1
Test the following:
● Stop mysql on node3: what happens?
● Stop garbd on node1: what happens?
● Start garbd on node1 and mysql on node3, block communication between node2 and node3: what happens this time?
● Block communication between node1 and node3: what happens?
29. To remember
● 3 nodes is the recommended minimum!
● Odd numbers of nodes are always better
● You can use a "fake" node (garbd) and even replicate through it!
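The quorum rule behind these recommendations can be sketched as a toy calculation (an illustration of majority voting only, not Galera's actual implementation, which also supports node weights):

```shell
# Toy illustration: a partition stays PRIMARY only if it holds a strict
# majority of the last known cluster size; a 50/50 tie loses.
has_quorum() {
  # $1 = nodes in this partition, $2 = total cluster size
  if [ $(( $1 * 2 )) -gt "$2" ]; then
    echo "PRIMARY"
  else
    echo "NON-PRIMARY"
  fi
}

has_quorum 2 3   # prints PRIMARY: 2 of 3 is a majority
has_quorum 1 2   # prints NON-PRIMARY: half of 2 is a tie, why even sizes hurt
```

This is why a 2-node cluster cannot survive losing a node, and why adding garbd as a third vote fixes it.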
30. Incremental State Transfer (IST)
● Used to avoid a full SST (using the gcache)
● gcache.size can be specified using wsrep_provider_options
● Now works even after a crash if the state is consistent
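The gcache size is set through the provider options; the value below is only an example, size it to cover the writes expected during your longest node outage (the default at this time was 128M):

```ini
[mysqld]
# example value, not a recommendation from the slides
wsrep_provider_options = "gcache.size=1G"
```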
31. Incremental State Transfer (IST) (cont.)
Exercise: stop mysql on node3, run pluk.py on node2, restart node3
33. Production Migration (2)
● Start node3
● Run pluk.py on node1
● Start the async replication from node1 to node2
● What about node3?
● Run pluk.py on node1
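Starting the node1 → node2 async replication can be sketched as follows (the user, password and binlog coordinates are placeholders; take the real coordinates from the backup of node1):

```sql
-- on node2; all values below are placeholders
CHANGE MASTER TO
    MASTER_HOST = 'node1',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'repl_password',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS = 107;
START SLAVE;
SHOW SLAVE STATUS\G
```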
35. Production Migration (3)
● Configure pluk.py to connect to the loadbalancer
● Run pluk.py
● Scratch the data on node1 and install PXC
● Configure PXC on node1
● Start the cluster on node1 (SST should be done with node3)
● Run pluk.py and check the data on all nodes
39. Online Schema Changes (2)
● Create a large table to modify
CREATE DATABASE pluk;
USE pluk;
CREATE TABLE `actor` (
  `actor_id` int unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `first_name` varchar(45) NOT NULL,
  `last_name` varchar(45) NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
      ON UPDATE CURRENT_TIMESTAMP
) ENGINE=InnoDB;
INSERT INTO actor (first_name, last_name)
  SELECT REPEAT('a', 45), REPEAT('b', 45) FROM dual;
INSERT INTO actor (first_name, last_name)
  SELECT REPEAT('a', 45), REPEAT('b', 45) FROM actor;
-- repeat this last INSERT until it takes 10+ seconds (the row count doubles each time)
40. Online Schema Changes (3)
● Use all three methods while running pluk.py against your new database, adding a new column each time
● Check pluk.py output
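Assuming the three methods are TOI, RSU and pt-online-schema-change (the usual trio for PXC; the slides don't name them, so treat that as an assumption), the two built-in methods look like this (column names are hypothetical):

```sql
-- TOI (the default): the ALTER replicates and runs on all nodes at once
SET GLOBAL wsrep_OSU_method = 'TOI';
ALTER TABLE pluk.actor ADD COLUMN col_toi INT;

-- RSU: the node desyncs and alters locally; repeat node by node
SET GLOBAL wsrep_OSU_method = 'RSU';
ALTER TABLE pluk.actor ADD COLUMN col_rsu INT;
```

The third method, pt-online-schema-change, runs from the shell and rebuilds the table in chunks while writes continue.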
42. Annual Percona Live MySQL Conference and Expo
The Hyatt Regency Hotel, Santa Clara, CA
April 22nd-25th, 2013
Registration and Call for Papers are open!
Visit:
http://www.percona.com/live/mysql-conference-2013/