MySQL InnoDB Cluster & Group Replication in a Nutshell: Hands-On Tutorial
Percona Live Europe 2017 - Dublin
Frédéric Descamps - MySQL Community Manager - Oracle
Kenny Gryp - MySQL Practice Manager - Percona
Safe Harbor Statement
The following is intended to outline our general product direction. It is intended for
information purposes only, and may not be incorporated into any contract. It is not a
commitment to deliver any material, code, or functionality, and should not be relied upon in
making purchasing decisions. The development, release and timing of any features or
functionality described for Oracle's products remains at the sole discretion of Oracle.
Copyright © 2017 Oracle and/or its affiliates. All rights reserved.
Who are we?
Agenda
Prepare your workstation
What are MySQL InnoDB Cluster & Group Replication?
Migration from Master-Slave to GR
How to monitor?
Application interaction
Setup your workstation
Install VirtualBox 5
From the USB key, copy PLeu17_GR.ova to your laptop and double-click on it
Ensure you have a vboxnet2 network interface
(VirtualBox Preferences -> Network -> Host-Only Networks -> +)
Start all virtual machines (mysql1, mysql2, mysql3 & mysql4)
Install PuTTY if you are using Windows
Try to connect to all VMs from your terminal or PuTTY (root password is X):
ssh -p 8821 root@127.0.0.1 to mysql1
ssh -p 8822 root@127.0.0.1 to mysql2
ssh -p 8823 root@127.0.0.1 to mysql3
ssh -p 8824 root@127.0.0.1 to mysql4
LAB1: Current situation
launch run_app.sh on mysql1 into a screen session
verify that mysql2 is a running slave
InnoDB Cluster's Architecture
[Diagram: applications connect through MySQL Connector and MySQL Router to the InnoDB cluster (one primary member plus additional members), administered via MySQL Shell]
Group Replication: heart of MySQL InnoDB Cluster
MySQL Group Replication
but what is it?!?
GR is a plugin for MySQL, made by MySQL and packaged with MySQL
GR is an implementation of Replicated Database State Machine theory
Paxos-based protocol
GR allows writes on all Group Members (cluster nodes) simultaneously while retaining consistency
GR implements conflict detection and resolution
GR allows automatic distributed recovery
Supported on all MySQL platforms!
Linux, Windows, Solaris, OS X, FreeBSD
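Because GR ships as a regular plugin, it is configured like any other MySQL feature, in my.cnf. As a rough sketch only, the group name, hostnames, and ports below are placeholders, not values from this tutorial's VMs:

```ini
# Minimal Group Replication settings (illustrative sketch, not lab config).
# group_name must be a valid UUID shared by all members of the group.
[mysqld]
server_id = 1
gtid_mode = ON
enforce_gtid_consistency = ON
plugin_load_add = 'group_replication.so'
group_replication_group_name = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
group_replication_start_on_boot = OFF
group_replication_local_address = "mysql1:33061"
group_replication_group_seeds = "mysql1:33061,mysql2:33061,mysql3:33061"
```

In the labs that follow, MySQL Shell's dba.configureLocalInstance() writes the equivalent settings for you.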
And for users?
no longer necessary to handle server fail-over manually or with a complicated script
GR provides fault tolerance
GR enables update-everywhere setups
GR handles crashes, failures, and re-connects automatically
Allows an easy setup of a highly available MySQL service!
ready?
Migration from Master-Slave to GR
The plan
1) We install and setup MySQL InnoDB Cluster on one of the new servers
2) We restore a backup
3) We setup asynchronous replication on the new server
4) We add a new instance to our group
5) We point the application to one of our new nodes
6) We wait and check that asynchronous replication is caught up
7) We stop those asynchronous slaves
8) We attach the mysql2 slave to the group
9) We use MySQL Router for directing traffic
LAB2: Prepare mysql3
Asynchronous slave
Latest MySQL 8.0.3-RC is already installed on mysql3.
Let's take a backup on mysql1:
[mysql1 ~]# xtrabackup --backup --target-dir=/tmp/backup \
    --user=root --password=X --host=127.0.0.1
[mysql1 ~]# xtrabackup --prepare --target-dir=/tmp/backup
LAB2: Prepare mysql3 (2)
Asynchronous slave
Configure /etc/my.cnf with the minimal requirements:
[mysqld]
...
server_id=3
enforce_gtid_consistency = on
gtid_mode = on
#log_bin # new default
#log_slave_updates # new default
LAB2: Prepare mysql3 (3)
Asynchronous slave
Let's start MySQL on mysql3:
[mysql3 ~]# systemctl start mysqld
[mysql3 ~]# mysql_upgrade
LAB3: mysql3 as asynchronous slave (1)
find the GTIDs purged
change MASTER
set the purged GTIDs
start replication
LAB3: mysql3 as asynchronous slave (2)
Find the latest purged GTIDs:
[mysql3 ~]# cat /tmp/backup/xtrabackup_binlog_info
mysql-bin.000002 167646328 b346474c-8601-11e6-9b39-08002718d305:1-771
Connect to mysql3 and setup replication:
mysql> CHANGE MASTER TO MASTER_HOST="mysql1",
    MASTER_USER="repl_async", MASTER_PASSWORD='Xslave',
    MASTER_AUTO_POSITION=1;
mysql> RESET MASTER;
mysql> SET GLOBAL gtid_purged="VALUE FOUND PREVIOUSLY";
mysql> START SLAVE;
Check that you receive the application's traffic
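Copying the GTID value by hand is error-prone. As a convenience sketch (not part of the lab scripts), a tiny helper can generate the SET statement straight from the backup metadata, assuming the three-field xtrabackup_binlog_info layout shown above:

```shell
#!/bin/sh
# Hypothetical helper: print the SET GLOBAL gtid_purged statement from an
# xtrabackup_binlog_info file whose third field is the purged GTID set.
gtid_purged_stmt() {
    awk 'NF >= 3 { printf "SET GLOBAL gtid_purged=\"%s\";\n", $3; exit }' "$1"
}
```

Typical use would be piping its output into the client, e.g. gtid_purged_stmt /tmp/backup/xtrabackup_binlog_info | mysql (after RESET MASTER).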
MySQL Shell
Administration made easy and more...
MySQL Shell
The MySQL Shell is an interactive JavaScript, Python, or SQL interface supporting
development and administration for MySQL. MySQL Shell includes the AdminAPI -- available
in JavaScript and Python -- which enables you to set up and manage InnoDB clusters. It
provides a modern and fluent API which hides the complexity associated with configuring,
provisioning, and managing an InnoDB cluster, without sacrificing power, flexibility, or
security.
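As a taste of how fluent that API is, the full lifecycle of a cluster condenses to a handful of AdminAPI calls (the same calls the labs below walk through step by step):

```
mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306')
mysql-js> dba.configureLocalInstance()
mysql-js> \c root@mysql3:3306
mysql-js> cluster = dba.createCluster('perconalive')
mysql-js> cluster.addInstance("root@mysql4:3306")
mysql-js> cluster.status()
```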
MySQL Shell (2)
As an example, the same operations as before but using the Shell:
LAB4: MySQL InnoDB Cluster
Create a single instance cluster
Time to use the new MySQL Shell!
[mysql3 ~]# mysqlsh
Let's verify if our server is ready to become a member of a new cluster:
mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306')
Change the configuration!
mysql-js> dba.configureLocalInstance()
LAB4: MySQL InnoDB Cluster (2)
Restart mysqld to use the new configuration:
[mysql3 ~]# systemctl restart mysqld
Create a single instance cluster:
[mysql3 ~]# mysqlsh
mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306')
mysql-js> \c root@mysql3:3306
mysql-js> cluster = dba.createCluster('perconalive')
LAB4: Cluster Status
mysql-js> cluster.status()
{
    "clusterName": "perconalive",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql3:3306",
        "ssl": "DISABLED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "mysql3:3306": {
                "address": "mysql3:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    }
}
LAB5: add mysql4 to the cluster (1)
Add mysql4 to the Group:
restore the backup
set the purged GTIDs
use MySQL Shell
LAB5: add mysql4 to the cluster (2)
Copy the backup from mysql1 to mysql4:
[mysql1 ~]# scp -r /tmp/backup mysql4:/tmp
And restore it:
[mysql4 ~]# systemctl stop mysqld
[mysql4 ~]# rm -rf /var/lib/mysql/*
[mysql4 ~]# xtrabackup --copy-back --target-dir=/tmp/backup
[mysql4 ~]# chown -R mysql. /var/lib/mysql
Start MySQL on mysql4:
[mysql4 ~]# systemctl start mysqld
[mysql4 ~]# mysql_upgrade
LAB5: MySQL Shell to add an instance (3)
[mysql4 ~]# mysqlsh
Let's verify the config:
mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306')
And change the configuration:
mysql-js> dba.configureLocalInstance()
Restart the service to enable the changes:
[mysql4 ~]# systemctl restart mysqld
LAB5: MySQL InnoDB Cluster (4)
Group of 2 instances
Find the latest purged GTIDs:
[mysql4 ~]# cat /tmp/backup/xtrabackup_binlog_info
mysql-bin.000002 167646328 b346474c-8601-11e6-9b39-08002718d305:1-77177
Connect to mysql4 and set GTID_PURGED:
[mysql4 ~]# mysqlsh
mysql-js> \c root@mysql4:3306
mysql-js> \sql
mysql-sql> RESET MASTER;
mysql-sql> SET GLOBAL gtid_purged="VALUE FOUND PREVIOUSLY";
LAB5: MySQL InnoDB Cluster (5)
mysql-sql> \js
mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306')
mysql-js> \c root@mysql3:3306
mysql-js> cluster = dba.getCluster()
mysql-js> cluster.checkInstanceState('root@mysql4:3306')
mysql-js> cluster.addInstance("root@mysql4:3306")
mysql-js> cluster.status()
Cluster Status
mysql-js> cluster.status()
{
    "clusterName": "perconalive",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql3:3306",
        "ssl": "DISABLED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active",
        "topology": {
            "mysql3:3306": {
                "address": "mysql3:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql4:3306": {
                "address": "mysql4:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "RECOVERING"
            }
        }
    }
}
Recovering progress
On standard MySQL, monitor the group_replication_recovery channel to see the progress:
mysql4> show slave status for channel 'group_replication_recovery'\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: mysql3
Master_User: mysql_innodb_cluster_rpl_user
...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
Retrieved_Gtid_Set: 6e7d7848-860f-11e6-92e4-08002718d305:1-6,
7c1f0c2d-860d-11e6-9df7-08002718d305:1-15,
b346474c-8601-11e6-9b39-08002718d305:1964-77177,
e8c524df-860d-11e6-9df7-08002718d305:1-2
Executed_Gtid_Set: 7c1f0c2d-860d-11e6-9df7-08002718d305:1-7,
b346474c-8601-11e6-9b39-08002718d305:1-45408,
e8c524df-860d-11e6-9df7-08002718d305:1-2
...
Migrate the application
point the application to the cluster
LAB6: Migrate the application
Make sure the gtid_executed range on mysql2 is lower or equal to the one on mysql3:
mysql[2-3]> show global variables like 'gtid_executed'\G
When they are OK, stop asynchronous replication on mysql2 and mysql3:
mysql2> stop slave;
mysql3> stop slave;
LAB6: Migrate the application (2)
Now we need to point the application to mysql3, this is the only downtime!
...
[ 21257s] threads: 4, tps: 12.00, reads: 167.94, writes: 47.98, response time: 18
[ 21258s] threads: 4, tps: 6.00, reads: 83.96, writes: 23.99, response time: 14
[ 21259s] threads: 4, tps: 7.00, reads: 98.05, writes: 28.01, response time: 16
[ 31250s] threads: 4, tps: 8.00, reads: 111.95, writes: 31.99, response time: 30
[ 31251s] threads: 4, tps: 11.00, reads: 154.01, writes: 44.00, response time: 13
[ 31252s] threads: 4, tps: 11.00, reads: 153.94, writes: 43.98, response time: 12
[ 31253s] threads: 4, tps: 10.01, reads: 140.07, writes: 40.02, response time: 17
^C
[mysql1 ~]# run_app.sh mysql3
Now they can forget about mysql1:
mysql[2-3]> reset slave all;
Add a third instance
previous slave (mysql2) can now be part of the cluster
LAB7: Add mysql2 to the group
We first upgrade to MySQL 8.0.3:
[mysql2 ~]# systemctl stop mysqld
[mysql2 ~]# rpm -Uvh /root/rpms/mysql*rpm
[mysql2 ~]# systemctl start mysqld
[mysql2 ~]# mysql_upgrade
and then we validate the instance using MySQL Shell and we configure it:
[mysql2 ~]# mysqlsh
mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306')
mysql-js> dba.configureLocalInstance()
[mysql2 ~]# systemctl restart mysqld
85 / 152
86. LAB7: Add mysql2 to the group (2)
Back in MySQL Shell we add the newinstance:
[mysql2 ~]# mysqlsh
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
86 / 152
87. LAB7: Add mysql2 to the group (2)
Back in MySQL Shell we add the newinstance:
[mysql2 ~]# mysqlsh
mysql-js> dba.checkInstanceCon guration('root@mysql2:3306')
mysql-js> c root@mysql3:3306
mysql-js> cluster = dba.getCluster()
mysql-js> cluster.addInstance("root@mysql2:3306")
mysql-js> cluster.status()
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
87 / 152
LAB7: Add mysql2 to the group (3)
{
    "clusterName": "perconalive",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql3:3306",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "mysql2:3306": {
                "address": "mysql2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql3:3306": {
                "address": "mysql3:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "mysql4:3306": {
                "address": "mysql4:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    }
}
Single Primary Mode
writing to a single server
Default = Single Primary Mode
By default, MySQL InnoDB Cluster enables Single Primary Mode.
mysql> show global variables like 'group_replication_single_primary_mode';
+---------------------------------------+-------+
| Variable_name                         | Value |
+---------------------------------------+-------+
| group_replication_single_primary_mode | ON    |
+---------------------------------------+-------+
In Single Primary Mode, a single member acts as the writable master (PRIMARY) and the
rest of the members act as hot-standbys (SECONDARY).
The group itself coordinates and configures itself automatically to determine which
member will act as the PRIMARY, through a leader election mechanism.
Who's the Primary Master? old fashion style
As the Primary Master is elected, all nodes that are part of the group know which one was
elected. This value is exposed in status variables:
mysql> show status like 'group_replication_primary_member';
+----------------------------------+--------------------------------------+
| Variable_name                    | Value                                |
+----------------------------------+--------------------------------------+
| group_replication_primary_member | 28a4e51f-860e-11e6-bdc4-08002718d305 |
+----------------------------------+--------------------------------------+
mysql> select member_host as "primary master"
    from performance_schema.global_status
    join performance_schema.replication_group_members
    where variable_name = 'group_replication_primary_member'
    and member_id=variable_value;
+----------------+
| primary master |
+----------------+
| mysql3         |
+----------------+
Who's the Primary Master? new fashion style
mysql> select member_host
    from performance_schema.replication_group_members
    where member_role='PRIMARY';
+-------------+
| member_host |
+-------------+
| mysql3      |
+-------------+
Create a Multi-Primary Cluster:
It's also possible to create a Multi-Primary Cluster using the Shell:
mysql-js> cluster=dba.createCluster('perconalive',{multiMaster: true})
A new InnoDB cluster will be created on instance 'root@mysql3:3306'.
The MySQL InnoDB cluster is going to be setup in advanced Multi-Master Mode.
Before continuing you have to confirm that you understand the requirements and
limitations of Multi-Master Mode. Please read the manual before proceeding.
I have read the MySQL InnoDB cluster manual and I understand the requirements
and limitations of advanced Multi-Master Mode.
Confirm [y|N]:
Or you can force it to avoid interaction (for automation):
mysql-js> cluster=dba.createCluster('perconalive',{multiMaster: true, force: true})
Member State
These are the different possible states for a group member:
ONLINE
OFFLINE
RECOVERING
ERROR: when a node is leaving but the plugin was not instructed to stop
UNREACHABLE
Status information & metrics
Members
mysql> SELECT member_host, member_state, member_role
    FROM performance_schema.replication_group_members;
+-------------+--------------+-------------+
| member_host | member_state | member_role |
+-------------+--------------+-------------+
| mysql4      | ONLINE       | SECONDARY   |
| mysql3      | ONLINE       | PRIMARY     |
+-------------+--------------+-------------+
2 rows in set (0.00 sec)
Status information & metrics - connections
mysql> SELECT * FROM performance_schema.replication_connection_status\G
Status information & metrics
Previously there were only local node statistics, now they are exposed all over the Group:
mysql> select * from performance_schema.replication_group_member_stats\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
VIEW_ID: 15059231192196925:2
MEMBER_ID: ade14d5c-9e1e-11e7-b034-08002...
COUNT_TRANSACTIONS_IN_QUEUE: 0
COUNT_TRANSACTIONS_CHECKED: 27992
COUNT_CONFLICTS_DETECTED: 0
COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: 8fc848d7-9e1c-11e7-9407-08002...
b9d01593-9dfb-11e7-8ca6-08002718d305:1-21,
da2f0910-8767-11e6-b82d-08002718d305:1-164741
LAST_CONFLICT_FREE_TRANSACTION: 8fc848d7-9e1c-11e7-9407-08002...
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
COUNT_TRANSACTIONS_REMOTE_APPLIED: 27992
COUNT_TRANSACTIONS_LOCAL_PROPOSED: 0
COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0
Performance_Schema
You can find GR information in the following Performance_Schema tables:
replication_applier_configuration
replication_applier_status
replication_applier_status_by_worker
replication_connection_configuration
replication_connection_status
replication_group_member_stats
replication_group_members
114. Status during recovery
mysql> SHOW SLAVE STATUS FOR CHANNEL 'group_replication_recovery'G
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
114 / 152
115. Status during recovery
mysql> SHOW SLAVE STATUS FOR CHANNEL 'group_replication_recovery'G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: <NULL>
Master_User: gr_repl
Master_Port: 0
...
Relay_Log_File: mysql4-relay-bin-group_replication_recovery.000001
...
Slave_IO_Running: No
Slave_SQL_Running: No
...
Executed_Gtid_Set: 5de4400b-3dd7-11e6-8a71-08002774c31b:1-814089,
afb80f36-2bff-11e6-84e0-0800277dd3bf:1-5718
...
Channel_Name: group_replication_recovery
116. Sys Schema
The easiest way to detect whether a node is a member of the primary component (for
example when your nodes get partitioned due to network issues), and is therefore a
valid candidate for routing queries to, is to use the sys schema.
Additional information for sys can be found at https://goo.gl/XFp3bt
On the primary node:
[mysql3 ~]# mysql < /root/addition_to_sys_mysql8.sql
118. Sys Schema
Is this node part of PRIMARY Partition:
mysql3> SELECT sys.gr_member_in_primary_partition();
+--------------------------------------+
| sys.gr_member_in_primary_partition() |
+--------------------------------------+
| YES                                  |
+--------------------------------------+
To use as healthcheck:
mysql3> SELECT * FROM sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       |                   0 |                    0 |
+------------------+-----------+---------------------+----------------------+
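The two answers above translate directly into routing logic. A minimal Python sketch of the decisions a proxy could derive from the view's columns; the staleness threshold `max_behind` is our own assumption, not part of the view:

```python
# Routing decisions derived from the columns of
# sys.gr_member_routing_candidate_status.
# max_behind is an arbitrary assumption, not part of the view.

def can_serve_reads(viable_candidate: str, transactions_behind: int,
                    max_behind: int = 100) -> bool:
    """A member may receive reads if it is in the primary partition
    and its applier queue is not too far behind."""
    return viable_candidate == "YES" and transactions_behind <= max_behind

def can_serve_writes(viable_candidate: str, read_only: str) -> bool:
    """A member may receive writes if it is in the primary partition
    and not running in (super_)read_only mode."""
    return viable_candidate == "YES" and read_only == "NO"

print(can_serve_reads("YES", 0))       # -> True  (the row shown above)
print(can_serve_writes("YES", "YES"))  # -> False (secondary, read-only)
```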
121. LAB8: Sys Schema - Health Check
On one of the non-Primary nodes, run the following command:
mysql-sql> flush tables with read lock;
Now you can verify what the healthcheck exposes to you:
mysql-sql> SELECT * FROM sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | YES       |                 950 |                    0 |
+------------------+-----------+---------------------+----------------------+
mysql-sql> UNLOCK TABLES;
124. MySQL Router
MySQL Router is lightweight middleware that provides transparent routing between your
application and backend MySQL Servers. It can be used for a wide variety of use cases,
such as providing high availability and scalability by effectively routing database traffic to
appropriate backend MySQL Servers.
MySQL Router doesn't require any specific configuration. It configures itself automatically
(bootstrap) using MySQL InnoDB Cluster's metadata.
125. LAB9: MySQL Router
We will now use mysqlrouter between our application and the cluster.
126. LAB9: MySQL Router (2)
Configure MySQL Router that will run on the app server (mysql1). We bootstrap it using
the Primary-Master:
[root@mysql1 ~]# mysqlrouter --bootstrap mysql3:3306 --user mysqlrouter
Please enter MySQL password for root:
WARNING: The MySQL server does not have SSL ...
Bootstrapping system MySQL Router instance...
MySQL Router has now been configured for the InnoDB cluster 'perconalive'.
The following connection information can be used to connect to the cluster.
Classic MySQL protocol connections to cluster 'perconalive':
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447
X protocol connections to cluster 'perconalive':
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470
[root@mysql1 ~]# chown -R mysqlrouter. /var/lib/mysqlrouter
129. LAB9: MySQL Router (3)
Now let's modify the configuration file to listen on port 3306:
in /etc/mysqlrouter/mysqlrouter.conf:
[routing:perconalive_default_rw]
-bind_port=6446
+bind_port=3306
We can now stop mysqld on mysql1 and start mysqlrouter:
[mysql1 ~]# systemctl stop mysqld
[mysql1 ~]# systemctl start mysqlrouter
131. LAB9: MySQL Router (4)
Before killing a member, we will change systemd's default behavior of restarting
mysqld immediately:
In /usr/lib/systemd/system/mysqld.service, add the following under [Service]:
RestartSec=30
[mysql3 ~]# systemctl daemon-reload
134. LAB9: MySQL Router (5)
Now we can point the application to the router (back to mysql1):
[mysql1 ~]# run_app.sh
Check the app and kill mysqld on mysql3 (the Primary Master R/W node)!
[mysql3 ~]# kill -9 $(pidof mysqld)
mysql2> select member_host
from performance_schema.replication_group_members
where member_role='PRIMARY';
+-------------+
| member_host |
+-------------+
| mysql4 |
+-------------+
135. ProxySQL / HAProxy / F5 / ...
3rd party router/proxy
140. 3rd party router/proxy
MySQL InnoDB Cluster can also work with a third-party router / proxy.
If you need some specific features that are not yet available in MySQL Router, like
transparent R/W splitting, then you can use your software of choice.
The important part of such an implementation is to use a good health check to verify
that the MySQL server you plan to route traffic to is in a valid state.
MySQL Router implements that natively, and it's very easy to deploy.
ProxySQL also has native support for Group Replication, which makes it maybe the
best choice for advanced users.
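Whatever proxy you pick, the health check boils down to classifying each member from the sys view shown earlier. A minimal Python sketch with the SQL execution stubbed out; `run_query` and the faked row are stand-ins for a real client call, not an actual API:

```python
# Health-check classification a third-party proxy needs. run_query is
# injected (a stand-in for executing SQL on the local member), so the
# logic can be exercised without a live server.

def member_role(run_query) -> str:
    """Classify the local member as 'writer', 'reader' or 'offline'
    from its sys.gr_member_routing_candidate_status row."""
    row = run_query("SELECT * FROM sys.gr_member_routing_candidate_status")
    if row is None or row["viable_candidate"] != "YES":
        return "offline"   # not in the primary partition: never route here
    return "reader" if row["read_only"] == "YES" else "writer"

# Faked row: a healthy secondary inside the primary partition.
fake = lambda sql: {"viable_candidate": "YES", "read_only": "YES",
                    "transactions_behind": 0}
print(member_role(fake))  # -> reader
```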
144. Recovering Nodes/Members
The old master (mysql3) got killed.
MySQL got restarted automatically by systemd.
Let's add mysql3 back to the cluster.
145. LAB10: Recovering Nodes/Members
[mysql3 ~]# mysqlsh
mysql-js> \c root@mysql4:3306 # The current master
mysql-js> cluster = dba.getCluster()
mysql-js> cluster.status()
mysql-js> cluster.rejoinInstance("root@mysql3:3306")
Rejoining the instance to the InnoDB cluster. Depending on the original
problem that made the instance unavailable, the rejoin operation might not be
successful and further manual steps will be needed to fix the underlying
problem.
Please monitor the output of the rejoin operation and take necessary action if
the instance cannot rejoin.
Please provide the password for 'root@mysql3:3306':
Rejoining instance to the cluster ...
The instance 'root@mysql3:3306' was successfully rejoined on the cluster.
The instance 'mysql3:3306' was successfully added to the MySQL Cluster.
146. mysql-js> cluster.status()
{ "clusterName": "perconalive",
"defaultReplicaSet": {
"name": "default",
"primary": "mysql4:3306",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"mysql2:3306": {
"address": "mysql2:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE" },
"mysql3:3306": {
"address": "mysql3:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE" },
"mysql4:3306": {
"address": "mysql4:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE" }
}
}
}
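The same status document lends itself to scripted checks. A minimal Python sketch over its JSON form, with the field values copied from the output above (only the keys actually used are reproduced):

```python
# Extract the primary and the failure tolerance from a cluster.status()
# document, treated as a plain dict (mysqlsh can emit it as JSON).
status = {
    "clusterName": "perconalive",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql4:3306",
        "status": "OK",
        "topology": {
            "mysql2:3306": {"mode": "R/O", "status": "ONLINE"},
            "mysql3:3306": {"mode": "R/O", "status": "ONLINE"},
            "mysql4:3306": {"mode": "R/W", "status": "ONLINE"},
        },
    },
}

def summarize(status: dict):
    """Return (primary, online member count, tolerated failures)."""
    rs = status["defaultReplicaSet"]
    online = [h for h, m in rs["topology"].items()
              if m["status"] == "ONLINE"]
    # A group of n members needs a majority, so it tolerates
    # floor((n - 1) / 2) failures: one failure for our three nodes.
    return rs["primary"], len(online), (len(online) - 1) // 2

print(summarize(status))  # -> ('mysql4:3306', 3, 1)
```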
148. Recovering Nodes/Members (automatically)
This time, before killing a member of the group, we will persist the configuration on
disk in my.cnf.
We will again use the same MySQL Shell command as before,
dba.configureLocalInstance(), but this time when all nodes are already part
of the Group.
150. LAB10: Recovering Nodes/Members (2)
Verify that all nodes are ONLINE.
...
mysql-js> cluster.status()
Then on all nodes run:
mysql-js> dba.configureLocalInstance()
151. LAB10: Recovering Nodes/Members (3)
Kill one node again:
[mysql3 ~]# kill -9 $(pidof mysqld)
systemd will restart mysqld; verify that the node rejoined the group automatically.
152. Thank you !
Any Questions?