Migrating on-premises data from Oracle and MySQL databases to Oracle and MySQL on Amazon RDS. These techniques also work for databases running on AWS EC2. Scripts are included in the slides.
2. Next 60 minutes …
• What is new in RDS
• Types of Data Migration
• General Considerations
• Advanced migration techniques for Oracle
• Near zero downtime migration for MySQL
3. RDS Recent Releases
• Oracle Transparent Data Encryption
• MySQL 5.6
• MySQL Replication to RDS
• CR1.8XLarge for MySQL 5.6
• Oracle Statspack
• Cross-region Snapshot Copy
8. RDS Pre-migration Steps
• Stop applications accessing the DB
• Take a snapshot
• Disable backups (see the CLI sketch below)
• Use Single-AZ instances
• Pick the optimum instance type for load performance
• Configure security for cross-DB traffic
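For example, backups can be disabled for the duration of the load by setting the retention period to zero. A sketch using the RDS CLI of the era (instance name is illustrative; verify the flags against your CLI version):
PROMPT> rds-modify-db-instance mydbinstance --backup-retention-period 0 --apply-immediately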
20. Upload files to EC2 using UDP
Install Tsunami on both the source database server and the EC2 instance.
Open port 46224 for Tsunami communication (a security-group sketch follows the install commands).
$ yum -y install make automake gcc autoconf cvs
$ wget http://sourceforge.net/projects/tsunami-udp/files/latest/download?_test=goal -O tsunami-udp.tar.gz
$ tar -xzf tsunami-udp.tar.gz
$ cd tsunami-udp*
$ ./recompile.sh
$ make install
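Opening the port with the AWS CLI might look like the following (group name and source address are placeholders; Tsunami uses TCP for its control channel and UDP for the data stream, so both are opened):
$ aws ec2 authorize-security-group-ingress --group-name MySecurityGroup --protocol tcp --port 46224 --cidr 203.0.113.10/32
$ aws ec2 authorize-security-group-ingress --group-name MySecurityGroup --protocol udp --port 46224 --cidr 203.0.113.10/32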
21. Using UDP tool Tsunami
On the source database server, start the Tsunami server:
$ cd /mnt/expdisk1
$ tsunamid *
On the EC2 instance, start the Tsunami client and fetch the files:
$ cd /mnt/data_files
$ tsunami
tsunami> connect source.db.server
tsunami> get *
22. Export and Upload in parallel
• No need to wait until all 18 files are done before starting the upload
• Start the upload as soon as the first set of 3 files is done (one possible handoff sketch below)
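One possible handoff sketch on the source side, assuming the export writes into /mnt/expdisk1 and that fuser succeeds only while Data Pump still holds a file open; completed files are moved to a ready/ directory for tsunamid to serve:
$ mkdir -p /mnt/expdisk1/ready
$ for f in /mnt/expdisk1/exp*.dmp; do
>   while fuser -s "$f" 2>/dev/null; do sleep 30; done   # wait until Data Pump closes the file
>   mv "$f" /mnt/expdisk1/ready/                         # expose only completed files
> done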
24. Transfer files to RDS instance
The RDS instance has an externally accessible Oracle directory object, DATA_PUMP_DIR.
Use a script to move the dump files into the RDS DATA_PUMP_DIR.
25. Perl script to transfer files to RDS instance
#!/usr/bin/perl
use strict;
use DBI;

# RDS instance info
my $RDS_PORT  = 4080;
my $RDS_HOST  = "myrdshost.xxx.us-east-1-devo.rds-dev.amazonaws.com";
my $RDS_LOGIN = "orauser/orapwd";
my $RDS_SID   = "myoradb";

my $dirname = "DATA_PUMP_DIR";
my $fname   = $ARGV[0];   # dump file to transfer, passed on the command line
my $data    = "dummy";
my $chunk   = 8192;

# PL/SQL snippets: open the target file in DATA_PUMP_DIR, write raw chunks, close
my $sql_open   = "BEGIN perl_global.fh := utl_file.fopen(:dirname, :fname, 'wb', :chunk); END;";
my $sql_write  = "BEGIN utl_file.put_raw(perl_global.fh, :data, true); END;";
my $sql_close  = "BEGIN utl_file.fclose(perl_global.fh); END;";
my $sql_global = "create or replace package perl_global as fh utl_file.file_type; end;";

my $conn = DBI->connect('dbi:Oracle:host='.$RDS_HOST.';sid='.$RDS_SID.';port='.$RDS_PORT, $RDS_LOGIN, '')
    || die($DBI::errstr . "\n");
my $updated = $conn->do($sql_global);
my $stmt = $conn->prepare($sql_open);
26. Perl script to transfer files to RDS instance (continued)
# bind_param_inout requires references to the Perl variables
$stmt->bind_param_inout(":dirname", \$dirname, 12);
$stmt->bind_param_inout(":fname", \$fname, 12);
$stmt->bind_param_inout(":chunk", \$chunk, 4);
$stmt->execute() || die($DBI::errstr . "\n");

open(INF, $fname) || die "\nCan't open $fname for reading: $!\n";
binmode(INF);

$stmt = $conn->prepare($sql_write);
my %attrib = (ora_type => 24);   # bind :data as raw binary data
my $val = 1;
while ($val > 0) {
    $val = read(INF, $data, $chunk);
    last unless $val;            # stop at end of file
    $stmt->bind_param(":data", $data, \%attrib);
    $stmt->execute() || die($DBI::errstr . "\n");
}
die "Problem copying: $!\n" if $!;
close(INF) || die "Can't close $fname: $!\n";

$stmt = $conn->prepare($sql_close);
$stmt->execute() || die($DBI::errstr . "\n");
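Saved as, say, copy_to_rds.pl (the name is arbitrary), the script is run once per dump file:
$ perl copy_to_rds.pl exp01.dmp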
27. Transfer files as they are received
• No need to wait until all 18 files have been received on the EC2 instance
• Start the transfer to the RDS instance as soon as the first file is received (see the loop sketch below)
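A minimal sketch of that receive-side loop, assuming the Perl script above is saved as copy_to_rds.pl and Tsunami writes into /mnt/data_files (fuser succeeds while a file is still being written):
$ for f in /mnt/data_files/exp*.dmp; do
>   while fuser -s "$f" 2>/dev/null; do sleep 30; done   # wait until the file is fully received
>   perl copy_to_rds.pl "$f"                             # push it to the RDS DATA_PUMP_DIR
> done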
33. Optimize the Data Pump Export
• Reduce the data set to an optimal size; avoid exporting indexes
• Use compression and parallel processing (an example expdp invocation follows this list)
• Use multiple disks with independent I/O
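An export invocation along these lines exercises all three points (schema, directory object, and file size are illustrative):
$ expdp orauser/orapwd SCHEMAS=myschema DIRECTORY=exp_dir DUMPFILE=exp%U.dmp FILESIZE=5G PARALLEL=3 COMPRESSION=ALL EXCLUDE=INDEX,STATISTICS
Naming multiple directory objects in DUMPFILE (e.g. dir1:exp%U.dmp,dir2:exp%U.dmp) spreads the write I/O across independent disks.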
34. Optimize Data Upload
• Use Tsunami for UDP-based file transfer
• Use a large EC2 instance with SSD or PIOPS volumes
• Use multiple disks with independent I/O
• You could use multiple EC2 instances for parallel upload
35. Optimize Data File upload to RDS
• Use the largest RDS instance possible during the import process (an import sketch follows this list)
• Avoid using the RDS instance for any other load during this time
• Provision enough storage in the RDS instance for the uploaded files and imported data
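With the dump files in DATA_PUMP_DIR, the import can be driven from the EC2 instance with impdp against the RDS endpoint; a sketch reusing the connection details from the Perl script (AWS also documents a DBMS_DATAPUMP-based alternative):
$ impdp orauser/orapwd@myrdshost.xxx.us-east-1-devo.rds-dev.amazonaws.com:4080/myoradb DIRECTORY=DATA_PUMP_DIR DUMPFILE=exp%U.dmp PARALLEL=3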
39. Importing From a MySQL DB Instance
[Architecture diagram: a mysqldump is taken from the on-premises application DB, shipped to an EC2 staging server via scp or Tsunami UDP, loaded into RDS for MySQL in the AWS Region, and then kept in sync via replication.]
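The per-table flat files that the load step on slide 48 expects can be produced with mysqldump's --tab mode; a sketch, assuming the server can write to /reinvent/tables:
$ mysqldump -u root -p --tab=/reinvent/tables --fields-terminated-by=',' bench
This writes a <table>.sql schema file and a <table>.txt data file per table, matching the load data local infile commands used later.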
44. Create RDS for MySQL and EC2
Create RDS for MySQL using AWS Management Console or CLI
PROMPT> rds-create-db-instance mydbinstance -s 1024 -c db.m3.2xlarge -e MySQL -u <masterawsuser> -p <secretpassword> --backup-retention-period 3
Create EC2 (staging server) using AWS Management Console or CLI
$ aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type m3.2xlarge --key-name MyKeyPair --security-groups MySecurityGroup
Create the replication user on the master
mysql> GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO repluser@'<RDS Endpoint>' IDENTIFIED BY '<password>';
45. Update /etc/my.cnf on the master server
Enable the MySQL binlog
This enables binary logging, which creates a file recording the changes that occur on the master; the slave reads these events to replicate the data.
[mysqld]
server-id = 1
binlog-do-db=mytest
relay-log = /var/lib/mysql/mysql-relay-bin
relay-log-index = /var/lib/mysql/mysql-relay-bin.index
log-error = /var/lib/mysql/mysql.err
master-info-file = /var/lib/mysql/mysql-master.info
relay-log-info-file = /var/lib/mysql/mysql-relay-log.info
log-bin = /var/lib/mysql/mysql-bin
46. Configure the master database
Restart the master database after /etc/my.cnf is updated
$ sudo /etc/init.d/mysqld restart
Record the "File" and the "Position" values.
$ mysql -h localhost -u root -p
mysql> show master status\G
*************************** 1. row ***************************
File: mysql-bin.000023
Position: 107
Binlog_Do_DB: mytest
Binlog_Ignore_DB:
1 row in set (0.00 sec)
47. Upload files to EC2 using UDP
• Tar and compress the MySQL dump files in preparation for shipping them to the EC2 staging server.
• Update the EC2 security group to allow UDP connections from the server where the dump files are created to your new MySQL client server.
• On the EC2 staging instance, untar the .tgz file (a sketch of the round trip follows).
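A minimal sketch of that round trip (file names and paths are illustrative):
# On the source server: package the dump files and serve them
$ tar -czf mysqldump.tgz /reinvent/tables
$ tsunamid mysqldump.tgz
# On the EC2 staging instance: fetch and unpack
$ tsunami
tsunami> connect source.db.server
tsunami> get mysqldump.tgz
$ tar -xzf mysqldump.tgz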
48. Configure the RDS database
Create the database
mysql> create database bench;
Import the data that you previously exported from the master database.
mysql> load data local infile '/reinvent/tables/customer_address.txt' into table customer_address fields terminated by ',';
mysql> load data local infile '/reinvent/tables/customer.txt' into table customer fields terminated by ',';
Configure the RDS for MySQL instance as a slave and start replication
mysql> call mysql.rds_set_external_master('<master server>', 3306, '<replication user>', '<password>', 'mysql-bin.000013', 107, 0);
mysql> call mysql.rds_start_replication;
50. Switch RDS MySQL to the Master
Switch over the RDS for MySQL
– Stop the service/application that is pointing at the master database
– Once all changes have been applied to the new RDS database, stop replication with "call mysql.rds_stop_replication"
– Point the service/application at the new RDS database
– Once the migration is complete, run "call mysql.rds_reset_external_master" (both calls are shown below)
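For reference, the two switchover calls as run from a MySQL client connected to the RDS endpoint:
mysql> call mysql.rds_stop_replication;
mysql> call mysql.rds_reset_external_master;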
51. Please give us your feedback on this presentation (DAT308)
As a thank you, we will select prize winners daily for completed surveys!
The goal is to migrate a 150 GB database from an on-premises MySQL database to RDS MySQL with greatly reduced downtime using replication. The idea is to restore a fresh backup to the RDS MySQL database, configure the old server as the master and turn on replication on the new server so the data stays in sync, switch the slave to become the master, and then point DNS at the new server, all with near zero downtime.
Replication between servers in MySQL is based on the binary logging mechanism. The MySQL instance operating as the master (the source of the database changes) writes updates and changes as “events” to the binary log. The information in the binary log is stored in different logging formats according to the database changes being recorded. Slaves are configured to read the binary log from the master and to execute the events in the binary log on the slave's local database.
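The events themselves can be inspected with the standard mysqlbinlog utility, for example against the log file recorded on slide 46:
$ mysqlbinlog /var/lib/mysql/mysql-bin.000023 | head -n 20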
The mysql.rds_set_external_master procedure designates the RDS instance as a slave of our master server. It provides the correct login credentials and tells the slave where to start replicating from; the master log file and log position come from the numbers we recorded earlier.
The AWS console will show the instance as replicating. Once the new RDS replica has caught up with the source instance, you may want to take a user snapshot (a CLI sketch follows).
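Taking that user snapshot with the AWS CLI might look like this (identifiers are placeholders):
$ aws rds create-db-snapshot --db-instance-identifier mydbinstance --db-snapshot-identifier mydbinstance-post-migration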