1. MySQL Cluster – Quick Start & Scaling for the Web Henrique Leandro (henrique.leandro@oracle.com) MySQL Engineer Marcelo Souza (marcelo.t.souza@oracle.com) MySQL Brazil
2.
3. Open source powers the Web & the Network MySQL: Serving Key Markets & Industry Leaders Enterprise 2.0 Telecommunications On Demand, SaaS, Hosting Web / Web 2.0 OEM / ISVs
4.
5.
6.
7. MySQL Cluster Data Nodes MySQL Cluster Application Nodes MySQL Cluster Mgmt Clients MySQL Cluster Architecture Parallel databases with no SPOF: high read & write performance & 99.999% uptime MySQL Cluster Mgmt
8. Scenarios we will follow: create a simple, single-host Cluster at the start of development; extend it to an HA configuration for staging; scale the Cluster online as your applications' demand grows. Data Node 192.168.0.31 MySQL Server Mgmt Node Data Node Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 32 33 34 Data Node Data Node mysqld mysqld 31 32 33 34 Data Node Data Node 35 36 mysqld Mgmt Node mysqld Mgmt Node
11. Download the latest GA version for your OS/architecture: http://www.mysql.com/downloads/cluster/#downloads
12. Install the software and set up the directories 192.168.0.31 $ tar xvf Downloads/mysql-cluster-gpl-7.1.4b-linux-i686-glibc23.tar.gz $ ln -s mysql-cluster-gpl-7.1.4b-linux-i686-glibc23/ mysqlc $ mkdir my_cluster my_cluster/ndb_data my_cluster/mysqld_data my_cluster/conf Data Node 192.168.0.31 MySQL Server Mgmt Node Data Node
13. Create the configuration file (for the MySQL Server) 192.168.0.31 $ vi my_cluster/conf/my.cnf [mysqld] ndbcluster datadir=/home/user1/my_cluster/mysqld_data basedir=/home/user1/mysqlc port=5000 Data Node 192.168.0.31 MySQL Server Mgmt Node Data Node
14. Create the configuration file (for the Cluster) 192.168.0.31 $ vi my_cluster/conf/config.ini [ndb_mgmd] hostname=localhost datadir=/home/user1/my_cluster/ndb_data id=1 [ndbd default] noofreplicas=2 datadir=/home/user1/my_cluster/ndb_data [ndbd] hostname=localhost id=3 [ndbd] hostname=localhost id=4 [mysqld] id=50 Data Node 192.168.0.31 MySQL Server Mgmt Node Data Node
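The config.ini above can be sketched as a small script that writes the file and sanity-checks its sections (the /tmp path is illustrative only; the deck uses /home/user1/my_cluster):

```shell
# Write the single-host config.ini from the slide and sanity-check it.
mkdir -p /tmp/my_cluster/conf
cat > /tmp/my_cluster/conf/config.ini <<'EOF'
[ndb_mgmd]
hostname=localhost
datadir=/home/user1/my_cluster/ndb_data
id=1

[ndbd default]
noofreplicas=2
datadir=/home/user1/my_cluster/ndb_data

[ndbd]
hostname=localhost
id=3

[ndbd]
hostname=localhost
id=4

[mysqld]
id=50
EOF
# Count the [ndbd] sections: two data nodes are defined
grep -c '^\[ndbd\]$' /tmp/my_cluster/conf/config.ini
```

With noofreplicas=2, those two data nodes form a single node group holding two copies of every row.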
15. Create the "mysql" system database 192.168.0.31 $ cd mysqlc [mysqlc]$ scripts/mysql_install_db --no-defaults --datadir=$HOME/my_cluster/mysqld_data/ Data Node 192.168.0.31 MySQL Server Mgmt Node Data Node
16. Start the management node, then the data nodes 192.168.0.31 $ cd ~/my_cluster [my_cluster]$ $HOME/mysqlc/bin/ndb_mgmd -f conf/config.ini --initial --configdir=$HOME/my_cluster/conf/ [my_cluster]$ $HOME/mysqlc/bin/ndbd -c localhost:1186 [my_cluster]$ $HOME/mysqlc/bin/ndbd -c localhost:1186 Data Node 192.168.0.31 MySQL Server Mgmt Node Data Node
17. Make sure the nodes are online before starting the MySQL Server 192.168.0.31 $ $HOME/mysqlc/bin/ndb_mgm -e show Connected to Management Server at: localhost:1186 Cluster Configuration --------------------- [ndbd(NDB)] 2 node(s) id=3 @127.0.0.1 (mysql-5.1.44 ndb-7.1.3, Nodegroup: 0, Master) id=4 @127.0.0.1 (mysql-5.1.44 ndb-7.1.3, Nodegroup: 0) [ndb_mgmd(MGM)] 1 node(s) id=1 @127.0.0.1 (mysql-5.1.44 ndb-7.1.3) [mysqld(API)] 1 node(s) id=50 (not connected, accepting connect from any host) Data Node 192.168.0.31 MySQL Server Mgmt Node Data Node
18. Start the MySQL Server and test the database 192.168.0.31 [my_cluster]$ $HOME/mysqlc/bin/mysqld --defaults-file=conf/my.cnf & [my_cluster]$ $HOME/mysqlc/bin/mysql -h 127.0.0.1 -P 5000 -u root mysql> create database clusterdb;use clusterdb; mysql> create table simples (id int not null primary key) engine=ndb; mysql> insert into simples values (1),(2),(3),(4); mysql> select * from simples; +----+ | id | +----+ | 1 | | 2 | | 4 | | 3 | +----+ Data Node 192.168.0.31 MySQL Server Mgmt Node Data Node
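Note that the rows come back in an arbitrary order: NDB returns them in the order it scans the partitions, not insertion order. To get a deterministic result, add an ORDER BY (standard SQL, not specific to this deck):

```sql
SELECT * FROM simples ORDER BY id;
```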
19.
20. Extend the Cluster for High Availability Data Node mysqld Mgmt Node Data Node Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 31 32 33 34
21. Install the software & create directories on 192.168.0.33/34 192.168.0.34 $ tar xvf Downloads/mysql-cluster-gpl-7.1.4b-linux-i686-glibc23.tar.gz $ ln -s mysql-cluster-gpl-7.1.4b-linux-i686-glibc23/ mysqlc $ mkdir my_cluster my_cluster/mysqld_data my_cluster/conf Data Node mysqld Mgmt Node Data Node Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 31 32 33 34 192.168.0.33 $ tar xvf Downloads/mysql-cluster-gpl-7.1.4b-linux-i686-glibc23.tar.gz $ ln -s mysql-cluster-gpl-7.1.4b-linux-i686-glibc23/ mysqlc $ mkdir my_cluster my_cluster/mysqld_data my_cluster/conf
22. Install the software & create directories on 192.168.0.32 Data Node mysqld Mgmt Node Data Node Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 31 32 33 34 192.168.0.32 $ tar xvf Downloads/mysql-cluster-gpl-7.1.4b-linux-i686-glibc23.tar.gz $ ln -s mysql-cluster-gpl-7.1.4b-linux-i686-glibc23/ mysqlc $ mkdir my_cluster my_cluster/ndb_data
23. Create the "mysql" system database (for the mysqld processes) 192.168.0.33 $ cd mysqlc [mysqlc] $ scripts/mysql_install_db --no-defaults --datadir=$HOME/my_cluster/mysqld_data/ 192.168.0.34 $ cd mysqlc [mysqlc] $ scripts/mysql_install_db --no-defaults --datadir=$HOME/my_cluster/mysqld_data/ Data Node mysqld Mgmt Node Data Node Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 31 32 33 34
24. Create the Cluster config files Data Node mysqld Mgmt Node Data Node Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 31 32 33 34 192.168.0.33 $ cd ~/my_cluster/conf/ [conf] $ vi config.ini [ndb_mgmd] hostname=192.168.0.33 datadir=/home/user1/my_cluster/ndb_data id=1 [ndb_mgmd] hostname=192.168.0.34 datadir=/home/user1/my_cluster/ndb_data id=2
25. Create the Cluster config files (continued) Data Node mysqld Mgmt Node Data Node Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 31 32 33 34 192.168.0.33 ... [ndbd default] noofreplicas=2 datadir=/home/user1/my_cluster/ndb_data [ndbd] hostname=192.168.0.31 id=3 [ndbd] hostname=192.168.0.32 id=4 [mysqld] id=50 [mysqld] id=51
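The noofreplicas setting determines how data nodes pair up: the cluster forms node groups of NoOfReplicas consecutive data nodes each, and every group holds a complete copy of its share of the data. A minimal sketch of that arithmetic (illustrative only, not NDB's internal code; node IDs are those used in this deck):

```python
# Illustrative only: how NDB groups data nodes given NoOfReplicas.
def node_groups(data_node_ids, no_of_replicas):
    """Split the data node list into consecutive groups of NoOfReplicas nodes."""
    if len(data_node_ids) % no_of_replicas != 0:
        raise ValueError("data node count must be a multiple of NoOfReplicas")
    return [data_node_ids[i:i + no_of_replicas]
            for i in range(0, len(data_node_ids), no_of_replicas)]

# The HA config above: nodes 3 and 4 with NoOfReplicas=2 -> one node group
print(node_groups([3, 4], 2))        # -> [[3, 4]]
# After the online scale-out later in the deck: nodes 3-6 -> two node groups
print(node_groups([3, 4, 5, 6], 2))  # -> [[3, 4], [5, 6]]
```

This is why data nodes are added two at a time later in the deck: a new node group needs NoOfReplicas new nodes.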
26. Create the Cluster config files (continued) Data Node mysqld Mgmt Node Data Node Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 31 32 33 34 192.168.0.33 [conf] $ vi my.cnf [mysqld] ndbcluster datadir=/home/user1/my_cluster/mysqld_data basedir=/home/user1/mysqlc port=5000 [conf] $ sftp 192.168.0.34 sftp> cd my_cluster/conf sftp> put config.ini sftp> put my.cnf sftp> bye
27. Take a backup and shut down the Cluster (START BACKUP writes its files to the BACKUP subdirectory under each data node's DataDir) 192.168.0.31 $ $HOME/mysqlc/bin/mysqladmin -u root -h 127.0.0.1 -P 5000 shutdown $ $HOME/mysqlc/bin/ndb_mgm -e "START BACKUP" $ $HOME/mysqlc/bin/ndb_mgm -e "SHUTDOWN" Data Node mysqld Mgmt Node Data Node Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 31 32 33 34
28. Start the management nodes Data Node mysqld Mgmt Node Data Node Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 31 32 33 34 192.168.0.33 $ cd ~/my_cluster [my_cluster] $ $HOME/mysqlc/bin/ndb_mgmd -f conf/config.ini --initial --configdir=$HOME/my_cluster/conf/ 192.168.0.34 $ cd ~/my_cluster [my_cluster] $ $HOME/mysqlc/bin/ndb_mgmd -f conf/config.ini --initial --configdir=$HOME/my_cluster/conf/
29. Start the data nodes Data Node mysqld Mgmt Node Data Node Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 31 32 33 34 192.168.0.31 $ $HOME/mysqlc/bin/ndbd -c 192.168.0.33:1186,192.168.0.34:1186 --initial 192.168.0.32 $ $HOME/mysqlc/bin/ndbd -c 192.168.0.33:1186,192.168.0.34:1186 --initial
30. Wait until the nodes are ready Data Node mysqld Mgmt Node Data Node Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 31 32 33 34 192.168.0.33 $ $HOME/mysqlc/bin/ndb_mgm -e "SHOW" Connected to Management Server at: localhost:1186 Cluster Configuration --------------------- [ndbd(NDB)] 2 node(s) id=3 @192.168.0.31 (mysql-5.1.44 ndb-7.1.4, Nodegroup: 0, Master) id=4 @192.168.0.32 (mysql-5.1.44 ndb-7.1.4, Nodegroup: 0) [ndb_mgmd(MGM)] 2 node(s) id=1 @192.168.0.33 (mysql-5.1.44 ndb-7.1.4) id=2 @192.168.0.34 (mysql-5.1.44 ndb-7.1.4) [mysqld(API)] 2 node(s) id=50 (not connected, accepting connect from any host) id=51 (not connected, accepting connect from any host)
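The slide numbering jumps from 30 to 32 here. Since the data nodes were just restarted with --initial (which clears their file systems), the missing slide presumably restores the backup taken earlier. A hedged sketch of that step using ndb_restore, as documented for MySQL Cluster 7.x – the backup ID, node IDs and paths are assumptions based on this deck, not taken from the missing slide:

```shell
# Run where the backup files live (node 3 and 4 were on 192.168.0.31):
# restore metadata once with -m, then data with -r for each original node.
$HOME/mysqlc/bin/ndb_restore -c 192.168.0.33:1186 -n 3 -b 1 -m -r \
    $HOME/my_cluster/ndb_data/BACKUP/BACKUP-1
$HOME/mysqlc/bin/ndb_restore -c 192.168.0.33:1186 -n 4 -b 1 -r \
    $HOME/my_cluster/ndb_data/BACKUP/BACKUP-1
```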
32. Start up MySQL Servers Data Node mysqld Mgmt Node Data Node Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 31 32 33 34 192.168.0.33 $ cd ~/my_cluster [my_cluster] $ $HOME/mysqlc/bin/mysqld --defaults-file=conf/my.cnf & 192.168.0.34 $ cd ~/my_cluster [my_cluster] $ $HOME/mysqlc/bin/mysqld --defaults-file=conf/my.cnf &
33. Check that the data is still there Data Node mysqld Mgmt Node Data Node Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 31 32 33 34 192.168.0.33 $ $HOME/mysqlc/bin/mysql -h 127.0.0.1 -P 5000 -u root mysql> create database clusterdb;use clusterdb; mysql> select * from simples; +----+ | id | +----+ | 3 | | 1 | | 2 | | 4 | +----+
34.
35. Adding Nodes Online Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 32 33 34 Data Node Data Node mysqld mysqld 31 32 33 34 Data Node Data Node 35 36 mysqld Mgmt Node mysqld Mgmt Node
36. Install the software & create directories on 192.168.0.35/36 192.168.0.35 $ tar xvf Downloads/mysql-cluster-gpl-7.1.4b-linux-i686-glibc23.tar.gz $ ln -s mysql-cluster-gpl-7.1.4b-linux-i686-glibc23/ mysqlc $ mkdir my_cluster my_cluster/ndb_data 192.168.0.36 $ tar xvf Downloads/mysql-cluster-gpl-7.1.4b-linux-i686-glibc23.tar.gz $ ln -s mysql-cluster-gpl-7.1.4b-linux-i686-glibc23/ mysqlc $ mkdir my_cluster my_cluster/ndb_data Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 32 33 34 Data Node Data Node mysqld mysqld 31 32 33 34 Data Node Data Node 35 36 mysqld Mgmt Node mysqld Mgmt Node
37. Create the "mysql" system database (for the second mysqld on each host) 192.168.0.33 [my_cluster] $ mkdir mysqld_data2 [my_cluster] $ cd ../mysqlc [mysqlc] $ scripts/mysql_install_db --no-defaults --datadir=$HOME/my_cluster/mysqld_data2/ [mysqlc] $ cd ../my_cluster 192.168.0.34 [my_cluster] $ mkdir mysqld_data2 [my_cluster] $ cd ../mysqlc [mysqlc] $ scripts/mysql_install_db --no-defaults --datadir=$HOME/my_cluster/mysqld_data2/ [mysqlc] $ cd ../my_cluster Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 32 33 34 Data Node Data Node mysqld mysqld 31 32 33 34 Data Node Data Node 35 36 mysqld Mgmt Node mysqld Mgmt Node
38. Modify the Cluster config.ini Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 32 33 34 Data Node Data Node mysqld mysqld 31 32 33 34 Data Node Data Node 35 36 mysqld Mgmt Node mysqld Mgmt Node 192.168.0.33 $ cd ~/my_cluster/conf/ [conf] $ vi config.ini ... [ndbd] hostname=192.168.0.35 id=5 [ndbd] hostname=192.168.0.36 id=6 [mysqld] id=52 [mysqld] id=53 ....
39. Create a config file for the new MySQL Servers Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 32 33 34 Data Node Data Node mysqld mysqld 31 32 33 34 Data Node Data Node 35 36 mysqld Mgmt Node mysqld Mgmt Node 192.168.0.33 [conf] $ vi my2.cnf [mysqld] ndbcluster datadir=/home/user1/my_cluster/mysqld_data2 basedir=/home/user1/mysqlc port=5001
40. Copy the config files to the other management node Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 32 33 34 Data Node Data Node mysqld mysqld 31 32 33 34 Data Node Data Node 35 36 mysqld Mgmt Node mysqld Mgmt Node 192.168.0.33 [conf] $ sftp 192.168.0.34 sftp> cd my_cluster/conf sftp> put config.ini sftp> put my2.cnf sftp> bye
41. Restart the management nodes Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 32 33 34 Data Node Data Node mysqld mysqld 31 32 33 34 Data Node Data Node 35 36 mysqld Mgmt Node mysqld Mgmt Node 192.168.0.33 [conf] $ cd .. [my_cluster] $ $HOME/mysqlc/bin/ndb_mgm -e "2 STOP" [my_cluster] $ $HOME/mysqlc/bin/ndb_mgm -e "1 STOP" [my_cluster] $ $HOME/mysqlc/bin/ndb_mgmd -f conf/config.ini --initial --configdir=$HOME/my_cluster/conf/ 192.168.0.34 [my_cluster] $ $HOME/mysqlc/bin/ndb_mgmd -f conf/config.ini --initial --configdir=$HOME/my_cluster/conf/
42. Restart the existing data nodes Data Node Data Node mysqld Mgmt Node mysqld Mgmt Node 31 32 33 34 Data Node Data Node mysqld mysqld 31 32 33 34 Data Node Data Node 35 36 mysqld Mgmt Node mysqld Mgmt Node 192.168.0.33 [my_cluster] $ $HOME/mysqlc/bin/ndb_mgm -e "3 RESTART" # Wait until Node 3 has restarted [my_cluster] $ $HOME/mysqlc/bin/ndb_mgm -e "SHOW" [ndbd(NDB)] 4 node(s) id=3 @192.168.0.31 (mysql-5.1.44 ndb-7.1.4, Nodegroup: 0) id=4 @192.168.0.32 (mysql-5.1.44 ndb-7.1.4, Nodegroup: 0, Master) id=5 (not connected, accepting connect from 192.168.0.35) id=6 (not connected, accepting connect from 192.168.0.36) [ndb_mgmd(MGM)] 2 node(s) id=1 @192.168.0.33 (mysql-5.1.44 ndb-7.1.4) id=2 @192.168.0.34 (mysql-5.1.44 ndb-7.1.4) [mysqld(API)] 4 node(s) id=50 @192.168.0.34 (mysql-5.1.44 ndb-7.1.4) id=51 @192.168.0.33 (mysql-5.1.44 ndb-7.1.4) id=52 (not connected, accepting connect from any host) id=53 (not connected, accepting connect from any host) [my_cluster] $ $HOME/mysqlc/bin/ndb_mgm -e "4 RESTART"
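The deck ends here, but the online add-node procedure it follows has further documented steps: start the new data nodes, create a node group from them, and repartition existing tables onto it. A hedged sketch of those steps (hosts, ports and table names follow this deck; check the MySQL Cluster manual for your version before running them):

```shell
# On 192.168.0.35 and 192.168.0.36: start the new data nodes
$HOME/mysqlc/bin/ndbd -c 192.168.0.33:1186,192.168.0.34:1186 --initial

# From a management client: group nodes 5 and 6 into a new node group
$HOME/mysqlc/bin/ndb_mgm -e "CREATE NODEGROUP 5,6"

# From any MySQL Server: spread existing data onto the new nodes
$HOME/mysqlc/bin/mysql -h 127.0.0.1 -P 5000 -u root -e \
  "ALTER ONLINE TABLE clusterdb.simples REORGANIZE PARTITION"
```

New data is written to the new node group as soon as it is created; REORGANIZE PARTITION redistributes the existing rows without blocking the application.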
Thanks for joining today's webinar: we'll present how to get started with MySQL Cluster, and then how to scale a deployment from evaluation through to production. Intro me and Andrew. Housekeeping rules: submit questions online via the question box on the right; we'll answer these over the webinar. The webinar is recorded; a replay plus slides will be sent out in a few days.
Divided into 2 sections: Getting started with MySQL Cluster – how to get the software, install, configure, run, etc. Scale for production – review requirements for HA & scaling, extend & deploy the configuration to multiple hosts, dynamically scale without disrupting the Cluster.
Before that, a quick overview of MySQL. World's most popular open source database – 150m downloads, 12m active installations. MySQL's user base is centered in five core areas: Web 2.0, SaaS, enterprise, OEM/embedded and telco – the last two have been the largest users of MySQL Cluster to date, but we also see rapid adoption of Cluster in web and eCommerce applications.
Design goals for MySQL Cluster – everything we do in development of the product is designed to enhance one or more of these core design goals:
- High performance, specifically write scalability: how do we deliver it without app developers having to modify their apps. The other dimension is low latency, to deliver real-time responsiveness
- Five-nines availability: handle both scheduled maintenance and failures – so planned and unplanned downtime – with less than 5 minutes of downtime per year
- Low TCO: acquisition and operation of the software, as well as optimising performance and availability on commodity hardware, to keep overall project costs down
So, look at how we deliver against those goals:
- a distributed hash table backed by an ACID relational model
- as the name suggests, MySQL Cluster comprises multiple nodes which act as a single system, implemented as a shared-nothing architecture that scales out on commodity hardware
- implemented as a pluggable storage engine for the MySQL Server, like InnoDB or MyISAM – so you get the ease of use and ubiquity of MySQL, plus direct access via embedded APIs, so you can eliminate SQL transformations completely and manage data directly from your app – C++, LDAP, HTTP and, most recently, Java and OpenJPA. This boosts performance and also lets developers work in their preferred environments and accelerate dev cycles
- automatic or user-configurable data partitioning across nodes; MySQL Cluster handles this, no need to partition within the apps
- synchronous data redundancy across nodes, using two-phase commit. Can be turned off, but the default and recommendation is for it to be on
- because of shared-nothing architecture & synchronous replication, we get sub-second failover. The system is also designed for self-healing recovery, so a failed node will automatically rejoin and re-sync with the cluster
- geographic replication, for DR
- data stored in main memory or on disk (configurable per column)
- logging and checkpointing of in-memory data to disk for durability – performed as a background process, to eliminate I/O waits
- online operations (i.e. add nodes, schema updates, maintenance, etc.), with no downtime for apps or clients
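The synchronous redundancy point above rests on two-phase commit: a coordinator asks every replica to prepare, and commits only once all have acknowledged. A toy illustration of the protocol's shape – not NDB's actual implementation:

```python
# Toy two-phase commit: commit only if every replica votes yes in phase 1.
def two_phase_commit(replicas, txn):
    """replicas: objects with prepare(txn) -> bool, commit(txn), abort(txn)."""
    prepared = []
    for r in replicas:                 # phase 1: ask everyone to prepare
        if r.prepare(txn):
            prepared.append(r)
        else:
            for p in prepared:         # someone voted no: roll everyone back
                p.abort(txn)
            return False
    for r in replicas:                 # phase 2: unanimous yes, commit everywhere
        r.commit(txn)
    return True

class Replica:
    def __init__(self, healthy=True):
        self.healthy, self.data = healthy, {}
    def prepare(self, txn):
        return self.healthy            # a failed node cannot prepare
    def commit(self, txn):
        self.data.update(txn)
    def abort(self, txn):
        pass                           # nothing was applied yet

a, b = Replica(), Replica()
print(two_phase_commit([a, b], {"id": 1}))              # True: both copies updated
print(two_phase_commit([a, Replica(False)], {"id": 2})) # False: a is untouched
```

The key property, which the second call demonstrates, is that a transaction never lands on only some of the replicas: both copies change or neither does.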
Look at the types of workload Cluster is deployed into, and the users. Cluster technology was originally developed by Ericsson, used purely as an in-memory, carrier-grade database embedded in network equipment – typically switches. Acquired by MySQL in 2003 – not just the technology, but also the engineering team, who have continued to develop the product rapidly in subsequent years: added disk-based tables, added the SQL interface, automatic node recovery, added Geo-Replication, open sourced it. Very strong in telecoms in subscriber databases (HLR/HSS) – truly mission-critical apps – and also in app servers and VAS. In web workloads, used a lot for session stores, eCommerce, and management of user profiles.
Here we show the architecture of MySQL Cluster. Data is distributed across multiple nodes, so you have a multi-master database with a parallel architecture that performs multiple write operations concurrently, with any changes instantly available to all clients accessing the cluster. There are no SPOFs – it is a shared-nothing architecture – so you get five-nines uptime. Three core elements: Data nodes handle the actual storage of, and access to, data from your application – data is distributed across the data nodes, with automatic partitioning, replication, failover and self-healing. You don't need complex logic in the application – the data nodes handle all of that. Application nodes access the data: either SQL nodes running MySQL – the standard SQL interface that developers program against – or a series of native interfaces that access the data directly from within an application, bypassing SQL for the highest performance and lowest latency: the C++ API, the Java API, and an OpenJPA plug-in for object/relational mapping. You can also access the data via LDAP servers, and over HTTP with an Apache module. Management nodes are used at start-up, to add nodes, to reconfigure the cluster, and for arbitration if there is a network failure – avoiding split-brain by determining which side of the cluster assumes ownership of servicing requests.
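The automatic partitioning the data nodes handle can be pictured as hashing each row's primary key to pick a partition, and hence a node group – every client computes the same placement, so no lookup service is needed. A simplified sketch (NDB actually uses its own MD5-based hash function and a hidden key for tables without an explicit primary key):

```python
import hashlib

# Simplified picture of NDB's automatic key-based partitioning.
def partition_for(primary_key, num_partitions):
    """Hash the primary key and map it to one of num_partitions partitions."""
    digest = hashlib.md5(str(primary_key).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# With 2 node groups (4 data nodes, NoOfReplicas=2), rows spread over 2 groups;
# the placement is deterministic, so reads and writes go straight to the owner.
rows = [1, 2, 3, 4]
print({pk: partition_for(pk, 2) for pk in rows})
```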