Set Up a MariaDB Cluster using Galera on Ubuntu 18.04

This tutorial will take you through the steps to set up an active-active MariaDB highly available cluster on Ubuntu 18.04.

To follow this guide, you will need three Ubuntu 18.04 (virtual or physical) machines each with a non-root user with sudo privileges.

Set Up MariaDB Repositories on All Nodes
First, we'll add the MariaDB repository key with the apt-key command, which the APT package manager will use to verify that the package is authentic. The key is fetched here from the standard Ubuntu keyserver:

sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8

Once you have the trusted key in the database, you can add the repository. Replace mariadb_mirror with the hostname of a MariaDB mirror near you (the repository configuration tool on mariadb.org lists available mirrors):

sudo add-apt-repository 'deb [arch=amd64,arm64,ppc64el] http://mariadb_mirror/mariadb/repo/10.4/ubuntu bionic main'

After adding the repository, run apt update in order to include package manifests from the new repository:

sudo apt update

Once you have completed this step on your first node, repeat for your second and third nodes.

Installing MariaDB on All Nodes
The latest available MariaDB Server and MariaDB Galera Server packages are combined, so installing mariadb-server will automatically install Galera and several dependencies:

sudo apt install mariadb-server

From MariaDB version 10.4 onwards, the root MariaDB user does not have a password by default. To set a password for the root user, start by logging into MariaDB:

sudo mysql -u root

Once you're inside the MariaDB shell, change the password by executing the following statement:

set password = password("your_password");

You will see the following output indicating that the password was set correctly:

Query OK, 0 rows affected (0.001 sec)

Exit the MariaDB shell by running the following command:

exit;

You now have all of the pieces necessary to begin configuring the cluster, but since you'll be relying on rsync in later steps, make sure it's installed:

sudo apt install rsync

This command will either confirm that the newest version of rsync is already installed or install it for you.

Configuring the First Node

By default, MariaDB is configured to check the /etc/mysql/conf.d directory to get additional configuration settings from files ending in .cnf. Create a file in this directory with all of your cluster-specific directives:

sudo nano /etc/mysql/conf.d/galera.cnf

Add the following configuration into the file. The configuration specifies different cluster options, details about the current server and the other servers in the cluster, and replication-related settings. Note that the IP addresses in the configuration are the private addresses of your respective servers; replace the placeholder values with the appropriate IP addresses and names.

[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so

# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://First_Node_IP,Second_Node_IP,Third_Node_IP"

# Galera Synchronization Configuration
wsrep_sst_method=rsync

# Galera Node Configuration
wsrep_node_address="This_Node_IP"
wsrep_node_name="This_Node_Name"

The first section adjusts MariaDB settings that the cluster needs to function correctly: Galera requires row-based binary logging and the InnoDB storage engine, and binding to 0.0.0.0 lets the node accept connections from the other nodes.

When you are satisfied with your cluster configuration file, copy its contents to your clipboard (you will reuse them on the other nodes), then save and close the file. With the nano text editor, you can do this by pressing CTRL+X, typing y, and pressing ENTER.

Now that you have configured your first node successfully, you can move on to configuring the remaining nodes in the next section.

Configuring the Remaining Nodes
In this step, you will configure the remaining two nodes. On your second node, open the configuration file:

sudo nano /etc/mysql/conf.d/galera.cnf

Paste in the configuration you copied from the first node, then update the Galera Node Configuration to use the IP address or resolvable domain name for the specific node you're setting up. Finally, update its name, which you can set to whatever helps you identify the node in your log files:

# Galera Node Configuration
wsrep_node_address="Second_Node_IP"
wsrep_node_name="Second_Node_Name"

Save and exit the file.

Once you have completed these steps, repeat them on the third node.
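Rather than editing the node-specific section by hand on each server, you can generate it. The following is a minimal sketch; the node name and private IP passed in are hypothetical examples, and the wsrep_* directive names are standard Galera settings:

```shell
# Sketch: print the node-specific section of galera.cnf for a given node.
# Arguments: node name, node private IP (both hypothetical examples here).
galera_node_conf() {
  cat <<EOF
# Galera Node Configuration
wsrep_node_address="$2"
wsrep_node_name="$1"
EOF
}

# Print the section for a hypothetical second node; redirect or append the
# output into /etc/mysql/conf.d/galera.cnf on that node.
galera_node_conf node-02 10.0.0.2
```
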

Opening the Firewall on All Nodes
In this step, you will configure your firewall so that the ports required for inter-node communication are open. On every server, check the status of the firewall by running:

sudo ufw status

In this case, only SSH is allowed through:

Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)

Since only SSH traffic is permitted in this case, you’ll need to add rules for MySQL and Galera traffic; without them, any attempt to start the cluster would fail because of the firewall.

Open the required ports with the following commands. Port 3306 carries MySQL client connections, 4567 carries Galera replication traffic (both TCP and UDP), 4568 is used for Incremental State Transfers, and 4444 for State Snapshot Transfers:

sudo ufw allow 3306,4567,4568,4444/tcp
sudo ufw allow 4567/udp

After you have configured your firewall on the first node, create the same firewall settings on the second and third node.
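Since the same rules must be applied on every node, a small loop helps avoid typos. This sketch only prints the commands as a dry run; review the output, then run them with sudo on each node:

```shell
# Dry run: print the ufw rules this guide opens on every node.
# 3306 = MySQL clients, 4567 = Galera replication, 4568 = IST, 4444 = SST.
for rule in '3306,4567,4568,4444/tcp' '4567/udp'; do
  echo "ufw allow $rule"
done
```
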

Starting the Cluster
In this step, you will start your MariaDB cluster. To begin, you need to stop the running MariaDB service so that you can bring your cluster online.

Use the following commands on all three servers to stop MariaDB and confirm that it has stopped, so that you can bring the nodes back up in a cluster:

sudo systemctl stop mysql
sudo systemctl status mysql

Jun 25 12:57:40 node-01 systemd[1]: Stopped MariaDB 10.4.4 database server.

Once you've shut down mysql on all of the nodes, you're ready to proceed.

To start the first node, you'll need to use a special startup script. The way you've configured your cluster, each node that comes online tries to connect to at least one other node specified in its galera.cnf file to get its initial state. Since no other nodes are running yet, a normal systemctl start mysql would fail on the first node; instead, use the galera_new_cluster script, which allows systemd to pass the --wsrep-new-cluster parameter:

sudo galera_new_cluster

This command will not display any output on successful execution. When this script succeeds, the node is registered as part of the cluster, and you can see it with the following command:

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

You will see the following output indicating that there is one node in the cluster:

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+
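If you want to script this check, you can extract just the number from the command's output. The sketch below uses printf to simulate the tab-separated batch output that mysql -e produces, so the parsing can be seen without a live cluster:

```shell
# Simulate the batch output of:
#   mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
# which prints a tab-separated header line followed by the value line.
out=$(printf 'Variable_name\tValue\nwsrep_cluster_size\t1\n')

# Pull out the bare number; handy in monitoring or wait-until-ready loops.
size=$(echo "$out" | awk '$1 == "wsrep_cluster_size" { print $2 }')
echo "$size"   # prints: 1
```
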

On the remaining nodes, you can start mysql normally. Each one will search for any online member of its cluster list, and when it finds one, it will join the cluster.

Now you can start the second node. Start mysql:

sudo systemctl start mysql

No output will be displayed on successful execution. You will see your cluster size increase as each node comes online:

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

You will see the following output indicating that the second node has joined the cluster and that there are two nodes in total.

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 2     |
+--------------------+-------+

It's now time to start the third node. Start mysql:

sudo systemctl start mysql

Run the following command to find the cluster size:

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"

You will see the following output, which indicates that the third node has joined the cluster and that the total number of nodes in the cluster is three.

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+

At this stage, the entire cluster is online and communicating successfully. Next, you can confirm that the setup works by testing replication.

Testing Replication
You've gone through the steps up to this point so that your cluster can perform replication from any node to any other node, known as active-active replication. Follow the steps below to test and see if the replication is working as expected.

You'll start by making database changes on your first node. The following commands will create a database called playground and a table inside of this database called equipment, then insert a row into the table:

mysql -u root -p -e 'CREATE DATABASE playground;
CREATE TABLE playground.equipment ( id INT NOT NULL AUTO_INCREMENT, type VARCHAR(50), quant INT, color VARCHAR(25), PRIMARY KEY(id));
INSERT INTO playground.equipment (type, quant, color) VALUES ("slide", 2, "blue");'

You now have one value in your table.

Next, look at the second node to verify that replication is working:

mysql -u root -p -e 'SELECT * FROM playground.equipment;'

If replication is working, the data you entered on the first node will be visible here on the second:

+----+-------+-------+-------+
| id | type  | quant | color |
+----+-------+-------+-------+
|  1 | slide |     2 | blue  |
+----+-------+-------+-------+

From this same node, you can write data to the cluster:

mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("swing", 10, "yellow");'

From the third node, you can read all of this data by querying the table again:

mysql -u root -p -e 'SELECT * FROM playground.equipment;'

You will see the following output showing the two rows:

   +----+-------+-------+--------+
   | id | type  | quant | color  |
   +----+-------+-------+--------+
   |  1 | slide |     2 | blue   |
   |  2 | swing |    10 | yellow |
   +----+-------+-------+--------+

Again, you can add another value from this node:

mysql -u root -p -e 'INSERT INTO playground.equipment (type, quant, color) VALUES ("seesaw", 3, "green");'

Back on the first node, you can verify that your data is available everywhere:

mysql -u root -p -e 'SELECT * FROM playground.equipment;'

You will see the following output which indicates that the rows are available on the first node.

   +----+--------+-------+--------+
   | id | type   | quant | color  |
   +----+--------+-------+--------+
   |  1 | slide  |     2 | blue   |
   |  2 | swing  |    10 | yellow |
   |  3 | seesaw |     3 | green  |
   +----+--------+-------+--------+

You've verified successfully that you can write to all of the nodes and that replication is being performed properly.
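Beyond checking the cluster size, each healthy node should also report its state as Synced via the wsrep_local_state_comment status variable. This sketch only prints the check to run against each node; the IP addresses are hypothetical, and you would remove the echo (and arrange remote access) to actually execute it:

```shell
# Dry run: print the per-node health check. wsrep_local_state_comment
# reports "Synced" on a node that is fully caught up with the cluster.
# The node IPs below are hypothetical placeholders.
for node in 10.0.0.1 10.0.0.2 10.0.0.3; do
  echo "mysql -h $node -u root -p -e \"SHOW STATUS LIKE 'wsrep_local_state_comment'\""
done
```
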

Wrapping up
You have configured a three-node Galera cluster with MariaDB. If you plan on using a Galera cluster in a production situation, it’s recommended that you begin with no fewer than five nodes.
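The node-count recommendation comes down to quorum arithmetic: a Galera cluster keeps accepting writes only while a strict majority of its nodes can still reach each other. A quick sketch of the math:

```shell
# Quorum math: with n nodes, a strict majority is n/2 + 1 (integer
# division), so the cluster tolerates n - (n/2 + 1) simultaneous failures.
for n in 3 5; do
  echo "$n nodes: quorum $((n / 2 + 1)), tolerates $((n - n / 2 - 1)) node failure(s)"
done
```

So three nodes survive a single failure, while five nodes survive two.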
