
MySQL Replication

Submitted by cafuego on 5 June, 2008 - 14:50

I found myself with some spare time the other day and decided that my current mysql backup strategy is not the best in the world. The mysql server is a virtual machine in a Brisbane datacenter and it's backed up via a script that calls mysqldump on each installed database and dumps the content to (compressed) files. These files then get sucked down to Melbourne via rdiff-backup.

This is fine in principle, but it does mean it's possible for me to lose up to 24 hours' worth of data due to an accidental 'DROP TABLE' statement.

A better approach would be for the remote sql server to replicate to a local one, on which I can run mysqldump more often without affecting web site performance. (Replication would replicate the DROP TABLE statement too, so I still need dumps :-)

Having just attended three days of OpenQuery mysql training and needing to regenerate all my ssl keys anyway, I thought I should create just this replicating master/slave setup, and have a local server replicating via ssl. The trickiest part, when replicating a server that already contains several years' worth of data, is making sure the slave starts out consistent.

Step 1: Master configuration 

So the first thing to do is alert your clients that their web sites will be briefly offline while you generate a consistent snapshot. Then you prepare the server. The default setup on Debian won't do replication out of the box, so some minor twiddling is needed.

So that it can keep track of which host has executed which query, you need to assign a unique server-id to each machine in your replicating setup. To ensure this uniqueness, I find it easiest to simply use the IP address with the dots removed:

server-id = 10100200

 Another required item is the binary log, which is what the slave will replay. No binlog, no queries to replay, no replication. So we tell the master where we want it to live: 

log_bin = /var/log/mysql/mysql-bin.log

Note that this is essentially a prefix: mysql will create a new binlog file once the current one reaches the size limit defined by max_binlog_size (mine is set to 100M). You can define how long these logs are kept via the expire_logs_days setting. Mine is 10 days.

If replication breaks and you don't notice it for longer than these 10 days, you will not be able to restore replication without resyncing the databases first!
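
For reference, the master-side settings discussed so far all live in the [mysqld] section of the config file (/etc/mysql/my.cnf on Debian); the size and retention values are just the ones I use, so adjust to taste:

[mysqld]
server-id        = 10100200
log_bin          = /var/log/mysql/mysql-bin.log
max_binlog_size  = 100M
expire_logs_days = 10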

You probably don't want to replicate all databases to the slave(s), and you can tell the master which databases should not be written to the binlog at all:

binlog_ignore_db = mysql
binlog_ignore_db = test

You need to repeat this line for each additional database you don't want logged to the binlog. I don't want the mysql database logged, because replaying it would break the privs on my slaves. Instead, I back up this database by other means. It doesn't change very often, so I'm unlikely to lose anything vital if I had to restore a 24 hour old copy of it.
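
Those "other means" don't need to be anything fancy; for example, a nightly cron job that dumps just the mysql database would do (the target path is just an example, and ROOTPASS is whatever your root password is):

mysqldump -u root -pROOTPASS mysql | gzip > /var/backups/mysql-grants-$(date +%F).sql.gz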

Because you don't want third parties to be able to grab all your SQL data or login details, you will need to set up SSL, so you can securely replicate and dump data remotely. Either generate a csr and buy a commercial certificate, or set up a local CA (Use /usr/lib/ssl/misc/CA.pl in Debian/Ubuntu) and generate your own. Install them on the server and add them to the mysqld config file:

ssl-ca=/etc/ssl/certs/ca.pem
ssl-cert=/etc/ssl/certs/sql.pem
ssl-key=/etc/ssl/private/sql.key

Note: Make sure the mysql server can read the key file.
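
On Debian the server runs as the mysql user, which normally can't read anything under /etc/ssl/private, so you'll likely need something along these lines (using the paths from the config above):

chown root:mysql /etc/ssl/private/sql.key
chmod 640 /etc/ssl/private/sql.key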

To complete the server side of things, you need to create a mysql user that can replicate data and is required to use SSL to log in:

mysql> GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.*
TO replicationuser@slavehost IDENTIFIED BY 'replicationpass' REQUIRE SSL;

Now restart the server, so it re-reads its config file.
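
On Debian that's simply a matter of:

/etc/init.d/mysql restart

If you want to double-check the replication account afterwards, SHOW GRANTS will confirm the privileges and the REQUIRE SSL clause:

mysql> SHOW GRANTS FOR 'replicationuser'@'slavehost';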

Step 2: Exporting the data 

Generating the snapshot is something you could do in several ways, but because I already had SSL connectivity set up I decided to simply run mysqldump on each of the remote databases. With a text file containing a list of the databases you need, it's trivial to use a loop in a shell script to dump the data. However, you need to make sure the databases do not get written to during this period.

You can read-lock all of them, but keep in mind that this will probably still break many web sites. I turned off all services that require mysql for the duration of the export. Just make sure you schedule this to occur late at night, and your clients and/or users will be fine with it. At the end of the day, they benefit the most from having their data backed up to the second :-)

So, first lock all tables in all databases that we want to export. I used bash and my list to generate an SQL command file that I could easily execute: 

for db in $(cat databases.txt); do
    echo "USE ${db}; FLUSH TABLES WITH READ LOCK;" >> lock.sql;
done 

I started the mysql client with the -A option, so it does not cache table names for auto-completion at each database switch:

mysql> source lock.sql 

...and a short while later I essentially had a read-only server. The locks will remain in place as long as this client is connected to the server, so just leave this as-is for the moment.

Now I could re-use my list of databases for the export command. I ran this remotely, using ssl. It's probably a better idea to run this locally on the MySQL server and copy the dump files to the slave via rsync; there's a sketch of that variant after the loop below.

for db in $(cat databases.txt); do
    mysqldump -h remotemaster --ssl-ca=/etc/ssl/certs/ca.pem -uUSER -pPASS \
    --compress --quick --triggers --routines -r "${db}.sql" "${db}";
done

... and voila. Done. 
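
For completeness, the dump-locally-and-rsync variant mentioned above would look much the same, minus the SSL options, followed by a copy to the slave (the hostname and target directory here are just examples):

for db in $(cat databases.txt); do
    mysqldump -uUSER -pPASS --quick --triggers --routines -r "${db}.sql" "${db}";
done
rsync -avz *.sql slavehost:/var/tmp/dumps/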

Step 3: Binlog info

We now have a consistent dump. All that remains is to find out where in the binlog we should be reading from if we want the slave to keep up to date. Find out the master status: 

mysql> SHOW MASTER STATUS\G
************ 1. row ************
            File: mysql-bin.000360
        Position: 15743168
    Binlog_Do_DB:
Binlog_Ignore_DB: mysql,test

We need this data to tell the slave from which point in the binlog it should start replicating. You can now close the client that's keeping all tables locked and start any services that you had shut down for the duration of the export.
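
If you'd rather not just disconnect, you can also release the locks explicitly from that same client session before closing it:

mysql> UNLOCK TABLES;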

Step 4: Importing the data

To import your dumps on the slave, use another loop to create the databases and import the correct sql dump into each of them:

for db in $(cat databases.txt); do
    mysql -u root -pROOTPASS -e "CREATE DATABASE ${db}";
    mysql -u root -pROOTPASS "${db}" < "${db}.sql";
done

Your slave now contains a copy of the data as it was on the master at the time you exported.

Step 5: Slave setup

In addition to the binlog_ignore_db directive on the master, which prevents queries on the specified databases from being written to the binlog, we can specify databases (and tables) to ignore on the slave as well. This helps you replicate specific databases to specific slaves only.

I add any local databases that I never want to be affected by whatever I replicate:

replicate-ignore-db = mysql
replicate-ignore-db = mythconverg
replicate-ignore-db = temp 

Again, you need to add a line for each database individually. You also need to add a server-id to the slave configuration. Then restart the slave, so these changes take effect before you start replication.
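
Put together, the replication-related bits of the slave's config end up looking something like this (the server-id is again just the slave's IP address with the dots removed, so yours will differ):

[mysqld]
server-id           = 192168010
replicate-ignore-db = mysql
replicate-ignore-db = mythconverg
replicate-ignore-db = temp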

All that remains now is to tell the slave about the master. This is done via the 'CHANGE MASTER' command:

mysql> CHANGE MASTER TO MASTER_HOST='masterserver', MASTER_USER='replicationuser',
    MASTER_PASSWORD='replicationpass', MASTER_LOG_FILE='mysql-bin.000360',
    MASTER_LOG_POS=15743168, MASTER_SSL=1, MASTER_SSL_CA='/etc/ssl/certs/ca.pem';

This will write the master info to the master.info file in your mysql datadir. Now start the replication threads on the slave:

mysql> START SLAVE;

and make sure everything is working:

mysql> SHOW SLAVE STATUS\G
************ 1. row ************
             Slave_IO_State: Waiting for master to send event
                Master_Host: masterserver
                Master_User: replicationuser
                Master_Port: 3306
              Connect_Retry: 60
            Master_Log_File: mysql-bin.000360
        Read_Master_Log_Pos: 16250997
             Relay_Log_File: mysql-bin.000325
              Relay_Log_Pos: 10831279
      Relay_Master_Log_File: mysql-bin.000360
           Slave_IO_Running: Yes
          Slave_SQL_Running: Yes
            Replicate_Do_DB: 
        Replicate_Ignore_DB: mysql,mythconverg,temp
         Replicate_Do_Table: 
     Replicate_Ignore_Table: 
    Replicate_Wild_Do_Table: 
Replicate_Wild_Ignore_Table: 
                 Last_Errno: 0
                 Last_Error: 
               Skip_Counter: 0
        Exec_Master_Log_Pos: 16250997
            Relay_Log_Space: 10831279
            Until_Condition: None
             Until_Log_File: 
              Until_Log_Pos: 0
         Master_SSL_Allowed: Yes
         Master_SSL_CA_File: /etc/ssl/certs/ca.pem
         Master_SSL_CA_Path: 
            Master_SSL_Cert: 
          Master_SSL_Cipher: 
             Master_SSL_Key: 
      Seconds_Behind_Master: 0

Step 6: Profit

In an ideal world, you will see Read_Master_Log_Pos change, as the slave processes queries from the master's binlog file. Profit!

If Slave_IO_State remains at 'Connecting to master', you have a connectivity problem. Make sure the replication user has access to the master from the slave host, and that you are not blocking the connection via a firewall. You can check this via the mysql cli client.
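
A quick test is to connect manually from the slave host, using the same credentials and CA file the replication thread uses:

mysql -h masterserver -u replicationuser -p --ssl-ca=/etc/ssl/certs/ca.pem -e "SELECT 1"

If that fails with a connection error, suspect the firewall; if you get access denied, look at the GRANT from Step 1 (including the REQUIRE SSL clause) or the password you gave in CHANGE MASTER.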

My next step is to find or write a nagios plugin that can alert me if the Read_Master_Log_Pos does not change over time, or if the value of Seconds_Behind_Master starts to go up, but more on that once I have it sorted :-)


This is a repost of a blog entry from Peter's personal blog.