There are many descriptions of how to set up MySQL replication – like here:

But I decided to write my own description to summarize my experience with this, and to have it ready in case of problems – when you are under pressure, it is much easier to just follow detailed instructions…

This description presumes:

  • you can connect to both machines only with remote terminals
  • you can transfer the backup file from the master to the replica using scp, either directly or through your local machine

Preparations on master:

  • connect to the master and start tmux or some other tool which preserves the session in case of network problems
  • check the available space on the filesystems to see if and where you have enough space
  • in one connection, start the mysql command line tool on the proper database
  • check the size of the databases using a SELECT like this one:
    SELECT table_schema as "DB Name",
    Round(Sum(data_length)/1024/1024, 1) as "tables - MB",
    Round(Sum(index_length)/1024/1024, 1) as "indexes - MB" 
    FROM information_schema.tables 
    GROUP BY table_schema;

    The query shows the size of tables and indexes separately for each database – you can estimate the size of your dump from the table size, because indexes are dumped only as statements, not as data.


  • and lock tables against changes:
    FLUSH TABLES WITH READ LOCK;
    Query OK, 0 rows affected (0.04 sec)
  • Check the master status – you need the current binary log filename and the position in it:
    mysql> show master status;
    +------------------+----------+--------------+------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +------------------+----------+--------------+------------------+
    | mysql-bin.000597 |   297908 | dbname       |                  |
    +------------------+----------+--------------+------------------+
    1 row in set (0.00 sec)
  • copy the output of the “show master status” command and store it somewhere safe
  • if this was not already done, grant replication privileges to the chosen user:
    GRANT REPLICATION SLAVE ON *.* TO 'replicationuser'@'%' IDENTIFIED BY 'replicationpassword';
  • keep the mysql command line tool and its connection running
  • open another terminal connection (or tmux window) to the master
  • Make a backup of the database – it sounds like something very simple, but it is not!
    • if your table data (without indexes) are at most ~5 GB in size, you can use the standard “mysqldump to mysql command line tool” solution.
      • mysqldump -u root -p --opt yourdatabasename > yourfilename.sql
      • The “--opt” option may seem redundant because it is enabled by default, but I prefer to state it explicitly. It gives you a backup with all DDL commands, like DROP statements, etc…
      • To explain – the dump itself is not such a big problem – dumping our ~25 GB database takes only ~25 minutes even with standard “mysqldump”, which is completely OK in our environment. But restoring the dump on the intended replica can be a really huge problem. See below.
    • If your table data are larger than ~5 GB, and especially if their total size is several dozen GB, then I strongly recommend the open source mydumper/myloader solution – it allows not only a parallel dump but also a parallel restore, which dramatically improves the total time needed to restore the data
      • I use this command for dump:
        mydumper -u root -a --database yourdb --outputdir /home/youruser/dumpname --chunk-filesize 256 --build-empty-files --routines --triggers --complete-insert --threads=10
        • it will ask for the password
        • it will dump big tables into chunk files of max. 256 MB
        • it will use 10 parallel threads for the dump
        • and it will also dump all triggers and routines
        • complete INSERT statements are, from my perspective, a good precaution…
        • all dump files will be placed in the specified directory
  • to release the locks on tables use this command:
    UNLOCK TABLES;
    • note: exiting the mysql command line tool in the session where you issued the read lock command releases the lock as well

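The master-side steps above can be condensed into one SQL session sketch (the user name and password are the placeholders from the GRANT example above; keep the session open until the dump is finished, otherwise the lock is released):

```sql
-- session 1: lock all tables and note the binlog coordinates
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;   -- note File and Position for the later CHANGE MASTER TO

-- only if not done before: create the replication user
GRANT REPLICATION SLAVE ON *.* TO 'replicationuser'@'%' IDENTIFIED BY 'replicationpassword';

-- ...run mysqldump / mydumper in a second terminal...

-- after the dump is finished, release the locks
UNLOCK TABLES;
```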

Preparations on replica:

  • connect to the replica and start tmux
  • transfer the backup file (or backup directory) to the replica (most likely using the scp command)
  • start the mysql command line tool
  • if you are repairing a broken or outdated already existing replica, you MUST issue these commands:
    stop slave;
    reset slave;
    • “reset slave” should delete the old replica’s logs and files – but I recommend checking manually using “sudo ls -la /var/lib/mysql” – see the files listed below
  • create the target db (or drop the existing one with the same name and create a new one)
  • if the backup is large, open another connection for the restoration of the backup and use the first connection to monitor progress – periodically check the size of the new database
  • restore the database from the Linux command line using this command:
    • if you dumped using mysqldump:
      mysql -u root -p yourdatabasename < yourfilename.sql
      • remember – restoration takes much longer than the dump!
      • in both solutions, all indexes are always rebuilt from scratch on the new database
      • totally unusable for big databases:
        • for example, when the database reached ~15 GB of table data + ~20 GB of indexes, the restoration time grew to 11+ hours…
        • later I did one test with ~22 GB of table data + ~35 GB of indexes, and the restoration took ~32 hours…
    • if you used mydumper, use the following command:
      myloader -u root -a --database yourdb --directory /home/youruser/dumpname --threads=10 --enable-binlog
      • it will restore the data in parallel using 10 threads
      • it will write all changes into the binlog
      • the restoration is much quicker
        • for example, the restoration of our database with ~22 GB of table data + ~35 GB of indexes took ~8:30 hours – which is for us still reasonable for the creation of the first replica (all the others we create by directly cloning the data files)
      • But be WARNED!!! If you set the number of threads too high, you will most likely overload your server / cloud instance to the point where it becomes totally unresponsive and inaccessible. In our environment, after several crashed tests, I had to limit it to max. 3 threads per CPU, leaving one CPU free…
  • make changes in /etc/mysql/my.cnf file:
    • set unique server-id
    • consider setting “read_only” on replica (read_only=ON)
  • restart replica
  • The restart of the new replica may fail with an error like this:
    161110 14:16:21 [ERROR] Failed to open the relay log './mysqld-relay-bin.000001' (relay_log_pos 4)
    161110 14:16:21 [ERROR] Could not find target log during relay log initialization

    The problem is described here –

    If it happens, make sure mysqld is not running and issue commands like these:

    sudo rm /var/lib/mysql/
    sudo rm /var/lib/mysql/
    sudo rm /var/lib/mysql/mysqld-relay-bin.*
  • set master:
    CHANGE MASTER TO MASTER_HOST='xx.xx.xx.xx',MASTER_USER='replicationuser', MASTER_PASSWORD='replicationpassword', 
    MASTER_LOG_FILE='mysql-bin.00xxxx', MASTER_LOG_POS=xxxxxxxx;
  • start replication – START SLAVE;
  • check status – SHOW SLAVE STATUS\G
  • if you do not have monitoring like mysql-exporter + Prometheus + Grafana, I strongly recommend monitoring the replication manually for some time using “show slave status”
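
The thread rule of thumb above (max. 3 threads per CPU, leaving one CPU free) can be sketched as a small shell helper – the numbers are only a starting point for your own environment, and the `safe_threads` name is hypothetical:

```shell
# Conservative thread count for mydumper/myloader:
# 3 threads per CPU, leaving one CPU free.
# Falls back to 1 thread on single-CPU machines.
safe_threads() {
  cpus=$1                       # number of CPUs, e.g. from `nproc`
  t=$(( (cpus - 1) * 3 ))
  if [ "$t" -lt 1 ]; then t=1; fi
  echo "$t"
}

safe_threads 4    # prints 9
# usage: myloader ... --threads=$(safe_threads "$(nproc)")
```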

Check on master:

  • check the connected replicas – SHOW SLAVE HOSTS;
    • check that you can see the server id of your replica
  • check the log position and compare it with the replica – SHOW MASTER STATUS;
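
For the manual monitoring recommended above, a small shell helper (the `slave_health` name is hypothetical) that filters the important fields out of “SHOW SLAVE STATUS\G” output – both Running fields must say “Yes” and Seconds_Behind_Master should drop to 0:

```shell
# Filter the health-relevant fields from "SHOW SLAVE STATUS\G" output
# piped into it on stdin.
slave_health() {
  grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_IO_Error|Last_SQL_Error'
}

# usage (against a running replica):
#   mysql -u root -p -e 'SHOW SLAVE STATUS\G' | slave_health
```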


If you restart the mysqld service on the replica, replication starts automatically. If you need to prevent that for any reason, add the line “skip-slave-start” into the [mysqld] section of the /etc/mysql/my.cnf file.
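
The replica-side /etc/mysql/my.cnf changes mentioned above might then look like this (the server-id value is just an example – it only has to be unique across the master and all replicas):

```ini
[mysqld]
# unique ID – must differ from the master and all other replicas
server-id       = 2
# reject writes from normal users on the replica
read_only       = ON
# optional: do not start replication automatically after a mysqld restart
# skip-slave-start
```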