
Set Up a Multi-node MySQL Replication Cluster on Ubuntu 18/19/20

Database clustering is particularly useful for highly active websites and applications. With replication, two or more MySQL servers act as a cluster. MySQL multi-node circular replication adds speed and redundancy.

This tutorial is written for Ubuntu 18.04.4, 19.04, 19.10, 20.04 and Debian 9, 10 using a non-root user, which means commands that require elevated privileges are prefixed with sudo.


For this guide, we will use the following three machines to set up a MySQL multi-node circular replication cluster.

HOSTNAME    IP ADDRESS      PURPOSE
dbserver1   192.168.10.1    MySQL, HAProxy, Heartbeat
dbserver2   192.168.10.2    MySQL, HAProxy, Heartbeat
dbserver3   192.168.10.3    MySQL, HAProxy, Heartbeat
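If DNS is not configured for these hostnames, you can map them in /etc/hosts on every node (a sketch, assuming the hostnames and IP addresses from the table above):

```
192.168.10.1 dbserver1
192.168.10.2 dbserver2
192.168.10.3 dbserver3
```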




Prerequisites

To follow this tutorial, you will need three (physical or virtual) machines with Ubuntu 18.04.4 server installed, each with a non-root user that has sudo privileges.

Set the correct timezone on each node with the command below (replace Asia/Karachi with your own timezone):
sudo timedatectl set-timezone Asia/Karachi

Installing MySQL

Use the following commands to install MySQL on each node (dbserver1, dbserver2, dbserver3 in our case):
sudo apt update
sudo apt -y install mysql-server mysql-client
Run the following script on each node (dbserver1, dbserver2, dbserver3 in our case) to secure your MySQL installation from unauthorized access:
sudo mysql_secure_installation
You will be asked to create a root password. It is recommended to answer yes to all of the questions at the following prompts:
Securing the MySQL server deployment.

Connecting to MySQL using a blank password.

VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which
are secure enough. Would you like to setup VALIDATE
PASSWORD plugin?

Press y|Y for Yes, any other key for No: y

There are three levels of password validation policy:

LOW    Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and dictionary file

Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 0
Please set the password for root here.

New password:
Re-enter new password:

Estimated strength of the password: 100
Do you wish to continue with the password provided? (Press y|Y for Yes, any other key for No) : y

By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.

Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
Success.

Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.

Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
Success.

By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.

Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y
 - Dropping test database...
Success.
 - Removing privileges on test database...
Success.

Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.

Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
Success.

All done!

Configuring MySQL

Edit the MySQL configuration file on each node (dbserver1, dbserver2, dbserver3 in our case).

on dbserver1:
sudo cp -p /etc/mysql/mysql.conf.d/mysqld.cnf /etc/mysql/mysql.conf.d/mysqld.cnf.orig

sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
Add or update the following values under the [mysqld] section, and don't forget to replace bind-address with your dbserver1 IP address. Note that auto-increment-offset must be unique on each node (1, 2 and 3 in our case) so the nodes never generate colliding auto-increment keys:
server_id           = 1
log_bin             = /var/log/mysql/mysql-bin.log
log_bin_index       = /var/log/mysql/mysql-bin.log.index
relay_log           = /var/log/mysql/mysql-relay-bin
relay_log_index     = /var/log/mysql/mysql-relay-bin.index
expire_logs_days    = 10
max_binlog_size     = 100M
log_slave_updates   = 1
auto-increment-increment = 3
auto-increment-offset = 1
bind-address        = 192.168.10.1
on dbserver2:
sudo cp -p /etc/mysql/mysql.conf.d/mysqld.cnf /etc/mysql/mysql.conf.d/mysqld.cnf.orig
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
Add or update the following values under the [mysqld] section, and replace bind-address with your dbserver2 IP address:
server_id           = 2
log_bin             = /var/log/mysql/mysql-bin.log
log_bin_index       = /var/log/mysql/mysql-bin.log.index
relay_log           = /var/log/mysql/mysql-relay-bin
relay_log_index     = /var/log/mysql/mysql-relay-bin.index
expire_logs_days    = 10
max_binlog_size     = 100M
log_slave_updates   = 1
auto-increment-increment = 3
auto-increment-offset = 2
bind-address        = 192.168.10.2
on dbserver3:
sudo cp -p /etc/mysql/mysql.conf.d/mysqld.cnf /etc/mysql/mysql.conf.d/mysqld.cnf.orig
sudo nano /etc/mysql/mysql.conf.d/mysqld.cnf
Add or update the following values under the [mysqld] section, and replace bind-address with your dbserver3 IP address:
server_id           = 3
log_bin             = /var/log/mysql/mysql-bin.log
log_bin_index       = /var/log/mysql/mysql-bin.log.index
relay_log           = /var/log/mysql/mysql-relay-bin
relay_log_index     = /var/log/mysql/mysql-relay-bin.index
expire_logs_days    = 10
max_binlog_size     = 100M
log_slave_updates   = 1
auto-increment-increment = 3
auto-increment-offset = 3
bind-address            = 192.168.10.3
Save and close file when you are done.
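To see why each node needs a unique auto-increment-offset, the id sequences the three nodes generate with auto-increment-increment = 3 can be sketched with a short shell helper (illustrative only; ids_for_node is a hypothetical helper, not part of MySQL):

```shell
# With auto_increment_increment=3, a node with offset K generates
# ids K, K+3, K+6, ... so nodes with offsets 1, 2 and 3 never collide.
ids_for_node() {
  offset=$1
  count=$2
  seq "$offset" 3 $((offset + 3 * (count - 1)))
}

ids_for_node 1 3   # -> 1 4 7 (one per line)
ids_for_node 2 3   # -> 2 5 8
ids_for_node 3 3   # -> 3 6 9
```

If all nodes used the same offset, they would all produce the same id sequence, and concurrent inserts on different masters would collide when replicated.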

Once completed, restart the MySQL service on each node for the changes to take effect:
sudo systemctl restart mysql

Creating Replication Users

Login to MySQL on each node (dbserver1, dbserver2, dbserver3) and create a replication user like below:

on dbserver1:
sudo mysql -u root -p
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.10.2' IDENTIFIED BY 'P@ssw0rd';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.10.3' IDENTIFIED BY 'P@ssw0rd';
on dbserver2:
sudo mysql -u root -p
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.10.1' IDENTIFIED BY 'P@ssw0rd';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.10.3' IDENTIFIED BY 'P@ssw0rd';
on dbserver3:
sudo mysql -u root -p
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.10.1' IDENTIFIED BY 'P@ssw0rd';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.10.2' IDENTIFIED BY 'P@ssw0rd';
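Note: the GRANT ... IDENTIFIED BY syntax above works on MySQL 5.7 (the default on Ubuntu 18.04). On MySQL 8.0 (the default on Ubuntu 20.04), creating the user and granting the privilege are separate statements; for example, on dbserver1:

```sql
-- MySQL 8.0: create the users first, then grant replication privileges
CREATE USER 'replication'@'192.168.10.2' IDENTIFIED BY 'P@ssw0rd';
CREATE USER 'replication'@'192.168.10.3' IDENTIFIED BY 'P@ssw0rd';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.10.2';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.10.3';
```

Adjust the host parts accordingly on dbserver2 and dbserver3.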
You can verify replication user authentication like below:

From dbserver1:
mysql -u replication -p -h 192.168.10.2 -P 3306
mysql -u replication -p -h 192.168.10.3 -P 3306


From dbserver2:
mysql -u replication -p -h 192.168.10.1 -P 3306
mysql -u replication -p -h 192.168.10.3 -P 3306


From dbserver3:
mysql -u replication -p -h 192.168.10.1 -P 3306
mysql -u replication -p -h 192.168.10.2 -P 3306

If you can connect to each remote server's MySQL instance with these commands, the replication users are set up correctly.

Configuring Database Replication

We will configure a MySQL master-master circular replication topology, which means in our three-node replication cluster:

  • dbserver1's master will be dbserver3
  • dbserver2's master will be dbserver1
  • dbserver3's master will be dbserver2
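The circular chain above can be written down as a tiny lookup helper, which is handy when scripting checks against the cluster (a sketch; master_of is a hypothetical helper, not part of any tool used in this guide):

```shell
# Map each node to its master in the circular topology described above.
master_of() {
  case "$1" in
    dbserver1) echo dbserver3 ;;
    dbserver2) echo dbserver1 ;;
    dbserver3) echo dbserver2 ;;
    *) echo "unknown node: $1" >&2; return 1 ;;
  esac
}

master_of dbserver2   # prints dbserver1
```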

To configure replication, first log in to the MySQL prompt on the first node's master (dbserver3 in our case) and query the master status with the command below:
sudo mysql -u root -p
show master status;
Note the File and Position values displayed in the command's output. On a fresh installation all three nodes typically report the same values, but always use the values from your own SHOW MASTER STATUS output on each master rather than copying the literal values below.

Now log in to the MySQL prompt on the first node (dbserver1 in our case) and set up replication for the database as shown below, replacing master_log_file and master_log_pos with the values you noted:
sudo mysql -u root -p
CHANGE MASTER TO master_host='192.168.10.3', master_port=3306, master_user='replication', master_password='P@ssw0rd', master_log_file='mysql-bin.000002', master_log_pos=154;
Next, log in to the MySQL prompt on the second node (dbserver2 in our case) and set up replication for the database (its master is dbserver1, so use the File and Position values from dbserver1's SHOW MASTER STATUS output):
sudo mysql -u root -p
CHANGE MASTER TO master_host='192.168.10.1', master_port=3306, master_user='replication', master_password='P@ssw0rd', master_log_file='mysql-bin.000002', master_log_pos=154;
Lastly, log in to the MySQL prompt on the third node (dbserver3 in our case) and set up replication for the database (its master is dbserver2, so use the File and Position values from dbserver2's SHOW MASTER STATUS output):
sudo mysql -u root -p
CHANGE MASTER TO master_host='192.168.10.2', master_port=3306, master_user='replication', master_password='P@ssw0rd', master_log_file='mysql-bin.000002', master_log_pos=154;
Next, go to first node (dbserver1 in our case) at MySQL prompt and type the below command to start replication process:
START SLAVE;
Wait 5 seconds, then type below command at MySQL prompt on dbserver1 to verify replication status:
SHOW SLAVE STATUS\G
If the above command's output shows Slave_IO_Running=Yes and Slave_SQL_Running=Yes, replication is working.

Next, type the below command at MySQL prompt on dbserver2 to start replication process:
START SLAVE;
Wait 5 seconds, then type below command at MySQL prompt on dbserver2 to verify the replication status:
SHOW SLAVE STATUS\G
If the above command's output shows Slave_IO_Running=Yes and Slave_SQL_Running=Yes, replication is working.

Next, type the below command at MySQL prompt on dbserver3 to start the replication process:
START SLAVE;
Wait 5 seconds, then type below command at MySQL prompt on dbserver3 to verify replication status:
SHOW SLAVE STATUS\G
If the above command's output shows Slave_IO_Running=Yes and Slave_SQL_Running=Yes, replication is working.
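Checking all three nodes by eye gets tedious; the two Slave_IO_Running / Slave_SQL_Running fields can be pulled out of the SHOW SLAVE STATUS output with a small shell function (a sketch; check_status is a hypothetical helper you would feed the text output of `sudo mysql -e 'SHOW SLAVE STATUS\G'`):

```shell
# Report replication health from the text of SHOW SLAVE STATUS\G.
check_status() {
  io=$(printf '%s\n' "$1" | awk -F': ' '/Slave_IO_Running:/ {print $2}')
  sql=$(printf '%s\n' "$1" | awk -F': ' '/Slave_SQL_Running:/ {print $2}')
  if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]; then
    echo "replication OK"
  else
    echo "replication BROKEN (IO=$io SQL=$sql)"
  fi
}

# Usage on a node:
# check_status "$(sudo mysql -e 'SHOW SLAVE STATUS\G')"
```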

Testing MySQL Replication

You can simply test replication between the servers by creating a dummy database and inserting a dummy row on dbserver1 like below:
sudo mysql -u root -p
Type the below commands at the mysql prompt to create a dummy database reptest (note the backticks around `table`, which is a reserved word):
create database reptest;
create table reptest.`table` (`id` varchar(10));
show tables in reptest;
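To test replication of data as well as schema, you can also insert a dummy row on dbserver1 (the table name must stay backquoted because it is a reserved word):

```sql
insert into reptest.`table` values ('row1');
select * from reptest.`table`;
```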
Type the below commands at the mysql prompt on dbserver2 to see if the dummy database reptest was replicated:
show databases;
show tables in reptest;
When queried, you should see the reptest database and its tables from dbserver1 replicated on dbserver2.

Next, type the below commands at the mysql prompt on dbserver3 to see if the dummy database reptest was replicated:
show databases;
show tables in reptest;
When queried, you should see the reptest and its tables from dbserver1 replicated on dbserver3.

Installing HAProxy

Install HAProxy on each node (dbserver1, dbserver2, dbserver3 in our case) with the command below:
sudo apt -y install haproxy

Configuring HAProxy

To configure HAProxy, you need to edit /etc/haproxy/haproxy.cfg and add a few configuration parameters as shown below.

Log in to your first node (dbserver1 in our case) and take a backup of the haproxy.cfg file with the command below:
sudo cp -p /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.orig
Next edit /etc/haproxy/haproxy.cfg file:
sudo nano /etc/haproxy/haproxy.cfg
Add the configuration parameters below at the end of the file, and make sure you replace the IP addresses and credentials with your own:
listen stats
    bind 192.168.10.10:7000
    stats enable
    stats hide-version
    stats uri /
    stats auth admin:P@ssw0rd
listen mysql-cluster
    bind 192.168.10.10:3306
    mode tcp
    option mysql-check user haproxy_check
    balance roundrobin
    server dbserver1 192.168.10.1:3306 check
    server dbserver2 192.168.10.2:3306 check
    server dbserver3 192.168.10.3:3306 check
Save and close file when you are finished.

Repeat these steps on dbserver2, dbserver3 as well.

Next, we need to create an additional user on the MySQL servers. This user will be used by HAProxy to check the health status of MySQL on each node.

Note that the haproxy_check user must be created without a password because HAProxy cannot handle password authentication for its health checks.

Login to any of the nodes at MySQL prompt with below command:
sudo mysql -u root -p
Type below command at MySQL prompt to create a user (haproxy_check):
create user 'haproxy_check'@'%';
flush privileges;
Create one more user, this time with a password and database root privileges, on the MySQL servers to test the load balancing functionality later via HAProxy.

Type below command at MySQL prompt to create root privileges user:
create user 'haproxy_root'@'%' identified by 'P@ssw0rd';
grant all privileges on *.* to 'haproxy_root'@'%';
You can verify on the remaining nodes at the MySQL prompt that the haproxy_check and haproxy_root users were replicated:
sudo mysql -u root -p
SELECT user, host FROM mysql.user;
If the above command's output shows the haproxy_check and haproxy_root users, you are good to go.



To make the virtual IP float across the database servers (dbserver1, dbserver2, dbserver3 in our case), we need to add the net.ipv4.ip_nonlocal_bind=1 directive to the sysctl settings on each node.

Edit /etc/sysctl.conf on dbserver1:
sudo nano /etc/sysctl.conf
Add below directive:
net.ipv4.ip_nonlocal_bind=1
Save and close file.

Next, type below command:
sudo sysctl -p
Repeat the same on dbserver2, dbserver3 as well.

When you are finished, restart HAProxy on all of the nodes with below command:
sudo systemctl restart haproxy
You can verify that HAProxy is listening with the command below on each node:
sudo netstat -ntlp
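With the VIP active, HAProxy should be listening on 192.168.10.10 ports 3306 and 7000. The check can be scripted by filtering the netstat output (a sketch; vip_listeners is a hypothetical helper assuming the virtual IP used in this guide):

```shell
# Count listening sockets bound to the cluster VIP in `netstat -ntlp` output.
vip_listeners() {
  awk '$6 == "LISTEN" && $4 ~ /^192\.168\.10\.10:/ {n++} END {print n+0}'
}

# Usage on a node (expects 2: ports 3306 and 7000):
# sudo netstat -ntlp | vip_listeners
```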

Installing and Configuring Heartbeat

In this step, we will install and configure Heartbeat on each node to make the virtual IP functional. The virtual IP will automatically float to the next available node if the primary node goes down.

Type the below command to install Heartbeat on each node (dbserver1, dbserver2, dbserver3 in our case):
sudo apt -y install heartbeat
Create a /etc/ha.d/authkeys file with the same content on each node:
sudo nano /etc/ha.d/authkeys
Add the configuration parameters below, and make sure you change P@ssw0rd to your own strong password:
auth 1
1 md5 P@ssw0rd
Save and close file.

Apply appropriate permission with below command:
sudo chmod 600 /etc/ha.d/authkeys
Repeat the same on dbserver2, dbserver3 as well.

This /etc/ha.d/authkeys file is used by Heartbeat to authenticate the cluster nodes with each other.
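Rather than a fixed password like P@ssw0rd, you can generate a random shared key for the authkeys file (a sketch using /dev/urandom and md5sum; generate the key once and copy the same authkeys content to every node):

```shell
# Generate a 32-character hex key suitable for the `1 md5 <key>` line.
gen_key() {
  head -c 32 /dev/urandom | md5sum | awk '{print $1}'
}

key=$(gen_key)
printf 'auth 1\n1 md5 %s\n' "$key"
```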

Next, we will create a main configuration file /etc/ha.d/ha.cf for Heartbeat on each node.

Type below command on dbserver1:
sudo nano /etc/ha.d/ha.cf
Add the configuration parameters below, and make sure you replace the network interface name (ens33) and the IP addresses with your own:
keepalive 2
deadtime 10

udpport        694
bcast  ens33
mcast ens33 225.0.0.1 694 1 0
ucast ens33 192.168.10.2
ucast ens33 192.168.10.3
udp     ens33

logfacility     local0

node    dbserver1
node    dbserver2
node    dbserver3
Save and close file.

Type below command on dbserver2:
sudo nano /etc/ha.d/ha.cf
Add the configuration parameters below, and make sure you replace the network interface name (ens33) and the IP addresses with your own:
keepalive 2
deadtime 10

udpport        694
bcast  ens33
mcast ens33 225.0.0.1 694 1 0
ucast ens33 192.168.10.1
ucast ens33 192.168.10.3
udp     ens33

logfacility     local0

node    dbserver1
node    dbserver2
node    dbserver3
Save and close file.

Type below command on dbserver3:
sudo nano /etc/ha.d/ha.cf
Add the configuration parameters below, and make sure you replace the network interface name (ens33) and the IP addresses with your own:
keepalive 2
deadtime 10

udpport        694
bcast  ens33
mcast ens33 225.0.0.1 694 1 0
ucast ens33 192.168.10.1
ucast ens33 192.168.10.2
udp     ens33

logfacility     local0

node    dbserver1
node    dbserver2
node    dbserver3
Save and close file.

Lastly, we need to create the /etc/ha.d/haresources file with same content on dbserver1, dbserver2 and dbserver3.

Type below command on dbserver1:
sudo nano /etc/ha.d/haresources
Add your shared (virtual) IP address (192.168.10.10) and the default master node (dbserver1 in our case):
dbserver1 192.168.10.10
Save and close file.

Repeat the same on dbserver2 and dbserver3.

When you are finished with the above, restart the Heartbeat service on each node with the command below:
sudo systemctl restart heartbeat

Type the below command on dbserver1 to verify that Heartbeat has activated the virtual IP:
ip addr show


You will see that the virtual IP is up and running on dbserver1 and that HAProxy is listening on it, so we can check how it works.

You can make test requests from any client machine with the mysql-client package installed, using the command below:
mysql -h 192.168.10.10 -u haproxy_root -p -e "show variables like 'server_id'"
This will ask for the haproxy_root user's password; enter the password you created earlier.

Run the command several times; the returned server_id value should change with each request, showing that round-robin load balancing is working across all of the nodes.
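To check the rotation from the command line rather than by eye, you can count how many distinct server_id values answer over several requests (a sketch; distinct_ids is a hypothetical helper, and the mysql call assumes the haproxy_root user created earlier):

```shell
# Count distinct values on stdin; with round-robin over three healthy
# backends, six requests should yield 3 distinct server_ids.
distinct_ids() {
  sort -u | wc -l
}

# Usage from a client machine:
# for i in 1 2 3 4 5 6; do
#   mysql -h 192.168.10.10 -u haproxy_root -pP@ssw0rd -N -B -e 'SELECT @@server_id'
# done | distinct_ids
```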


Now we need to check failover when dbserver1 goes offline. For example, restart or shut down dbserver1 manually, and check that the virtual IP moves to the next available node (dbserver2 in our case) and that requests to the MySQL servers still succeed.

Type below command from client machine again.
mysql -h 192.168.10.10 -u haproxy_root -p -e "show variables like 'server_id'"
Also check the HAProxy status by navigating to http://192.168.10.10:7000 in your preferred browser, using the admin user and the password you set in haproxy.cfg.

You will see the HAProxy dashboard with your database servers' information.


Wrapping up 

You now have a three-node MySQL circular replication cluster with round-robin load balancing via HAProxy, ready to use in your production environment.

3 comments:

  1. Anonymous, July 31, 2020

    Hello,

    After running "sudo apt -y install heartbeat" I am getting the error mentioned below:
    info: See also: http://linux-ha.org/wiki/Ha.cf#node_directive
    WARN: Deprecated 'legacy' auto_failback option selected.
    WARN: Please convert to 'auto_failback on'.
    WARN: See documentation for conversion details.
    WARN: Logging daemon is disabled --enabling logging daemon is recommended
    ERROR: Configuration error, heartbeat not started.
    heartbeat.service: Control process exited, code=exited status=6
    Failed to start LSB: High-availability services.
    heartbeat.service: Unit entered failed state.
    heartbeat.service: Failed with result 'exit-code'.

    Please help me out.

    Reply:
    1. Pay attention to the suggested options in the output you shared and fix them first:

      WARN: Please convert to 'auto_failback on'.

      WARN: Logging daemon is disabled --enabling logging daemon is recommended
  2. Hello Sir,
    I have set 'auto_failback on'. Can you give some direction on how to enable the logging daemon?

    Here is the latest output:
    heartbeat: baudrate setting must precede media statements
    heartbeat[5275]: info: Pacemaker support
    ERROR: Current node [mysql2-virtualbox] not in configuration!
    info: By default, cluster nodes are named by `uname -n` and must be declared wit
    info: See also: http://linux-ha.org/wiki/Ha.cf#node_directive
    WARN: Logging daemon is disabled --enabling logging daemon is recommended
    heartbeat[5275]: 2020/08/01_13:29:44 ERROR: Configuration error, heartbeat not started.
    heartbeat.service: Control process exited, code=exited status=6
    Failed to start LSB: High-availability services.
    heartbeat.service: Unit entered failed state.
    heartbeat.service: Failed with result 'exit-code'.

