DRBD, OCFS2 and clusters with Ubuntu 18.04 LTS


Post by embleton » Sun Mar 31, 2019 2:44 pm

An article on Distributed Replicated Block Device (DRBD), the Oracle Cluster File System (OCFS2), the Corosync Cluster Engine, the Pacemaker cluster manager, MySQL master-master replication and clusters with Ubuntu 18.04 LTS, providing High Availability (HA) for Apache (web server), MySQL (SQL database), exim4 (SMTP), Dovecot (IMAP), phpBB and WordPress. I won't explain the installation of LAMP and phpBB, for there are many sites that cover those tasks. A leading '>' marks a prompt on the node host and is NOT to be typed; the exception is exporting database(s) from MySQL with mysqldump, where '>' is the redirection into the dump file.

The purpose of a DRBD & OCFS2 primary-primary mirror cluster is to have nodes concurrently in operation with shared block devices that can be mounted on both nodes. The configuration below was done for 2 nodes and is fully operational at about 200Mbps between the nodes with the hardware below. The configuration is hardware-agnostic, so it should work on any hardware or virtual machine with a spare disk on each machine/VM. This configuration is known to work, for this board uses it! ;)

The "/mnt" volume is where you'd put folders such as "/mnt/www" for a website shared between the cluster nodes, as an example. Users and groups have numerical identifiers (UIDs and GIDs), and care should be taken that these align across the cluster nodes: when you create a user on one node, it should immediately be created on the other node with the same IDs, and likewise for groups. It is best to have a cluster file system (OCFS2) above DRBD, which allows both nodes to operate concurrently.
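As a sketch of keeping the IDs aligned, the numeric IDs can be pinned explicitly when the user is created on each node (the name 'webadmin' and the IDs 2001/2001 are illustrative placeholders, not from this setup):

```shell
# Run the same commands on BOTH nodes so the numeric IDs match;
# 'webadmin' and 2001 are placeholder values for illustration.
groupadd --gid 2001 webadmin
useradd --uid 2001 --gid 2001 --no-create-home --shell /usr/sbin/nologin webadmin

# Files on the OCFS2 volume are owned by numeric ID, so a mismatch
# shows up as a wrong owner on the other node; verify with:
getent passwd webadmin
```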

2 * KODLIX GN41 4K@60Hz Mini PCs; Gemini Lake Celeron N4100, 8GB memory & 64GB eMMC.
2 * 2.5-inch 240GB SSDs for the GN41 DRBD disks.

#### configuration for /etc/hosts

192.168.1.29 node1
192.168.1.30 node2

#### DRBD and its configuration in /etc/drbd.conf on both hosts.

include "drbd.d/global_common.conf";
include "drbd.d/*.res";
resource r0 {
protocol C;

disk {
c-fill-target 10M;
c-max-rate 700M;
c-plan-ahead 7;
c-min-rate 4M;
}

net {
allow-two-primaries;
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;
after-sb-2pri disconnect;
}

startup { become-primary-on both; }

floating 192.168.1.29:7788 {
device /dev/drbd0;
disk /dev/sda;
meta-disk internal;
}
floating 192.168.1.30:7788 {
device /dev/drbd0;
disk /dev/sda;
meta-disk internal;
}
}

#### OCFS2 and its configuration in /etc/ocfs2/cluster.conf on both hosts; the spacing and placement shown in the photo are important for it to work.

ocfs2.png (17.6 KiB)
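The screenshot of cluster.conf is not reproduced here, so as a sketch: for the two nodes in /etc/hosts above and the cluster name "mental" used in the dpkg-reconfigure step below, a typical /etc/ocfs2/cluster.conf would look like the following. The parser is picky: section headers are flush left and end with a colon, and every attribute line must be indented with a tab.

```
node:
	ip_port = 7777
	ip_address = 192.168.1.29
	number = 0
	name = node1
	cluster = mental

node:
	ip_port = 7777
	ip_address = 192.168.1.30
	number = 1
	name = node2
	cluster = mental

cluster:
	node_count = 2
	name = mental
```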


#### The Corosync Cluster Engine and the Pacemaker cluster manager are NOT needed; they are used by my nodes for a floating public IP.

#### Boot script in rc.local on both hosts, run at bootup via a CRON @reboot entry.

service drbd start
service ocfs2 start
/etc/init.d/o2cb start
mount.ocfs2 /dev/drbd0 /mnt
service pacemaker start
service apache2 start
service exim4 start
service dovecot start

#### MySQL master-master replication and its configuration in /etc/mysql/my.cnf

# on node1.
[mysqld]
server_id = 1
sync_binlog = 1
log_bin = mysql-bin.log
log_bin_index = mysql-bin.log.index
#binlog_do_db=mycaps,wordpress
relay_log = mysql-relay-bin
relay_log_index = mysql-relay-bin.index
binlog_ignore_db=mysql
expire_logs_days = 10
max_binlog_size = 100M
log_slave_updates = 1
auto_increment_increment = 2
auto_increment_offset = 1
bind-address = 0.0.0.0

# on node2.
[mysqld]
server_id = 2
sync_binlog = 1
log_bin = mysql-bin.log
log_bin_index = mysql-bin.log.index
relay_log = mysql-relay-bin
relay_log_index = mysql-relay-bin.index
binlog_ignore_db=mysql
expire_logs_days = 10
max_binlog_size = 100M
log_slave_updates = 1
auto_increment_increment = 2
auto_increment_offset = 2
bind-address = 0.0.0.0
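The resync steps below authenticate as a 'replicator' user with password 'changeme'; if that account does not already exist, it can be created with something like the following, run at the MySQL prompt on BOTH nodes so each can replicate from the other:

```sql
-- Run on both nodes; 'changeme' matches the password used below.
CREATE USER 'replicator'@'%' IDENTIFIED BY 'changeme';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'%';
FLUSH PRIVILEGES;
```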

-----------

#### install packages for DRBD.
>apt-get install drbd-utils ntp

#### create the metadata for the primary-primary mirror on both PCs (before the first start).
>drbdadm create-md r0

#### both nodes will show as Secondary when executing
>service drbd start

#### used to watch the status of the sync.
>watch cat /proc/drbd

#### only do this on the primary node; once the sync has finished, run the same command on the secondary node.
>drbdadm primary r0

primary-primary.png (17.75 KiB)

#### install OCFS2 package tools.
>apt-get install ocfs2-tools

#### configure the cluster: accept the defaults for all questions, except answer "y" to "Load O2CB driver on boot"
#### and give the cluster name ("mental" here, matching cluster.conf) for "Cluster to start on boot".
>dpkg-reconfigure ocfs2-tools

#### start o2cb cluster management and ocfs2 file service.
>/etc/init.d/ocfs2 start
>/etc/init.d/o2cb start

#### create OCFS2 mirror data sharing file system on one host only.
>mkfs.ocfs2 -L "Website" /dev/drbd0

#### If MySQL master-to-master replication is out of sync.

#### Export the MySQL database(s) on the node whose database(s) are up to date.
>mysqldump mydatabase > mydatabase.sql
>mysql -u root -p
#### at the mysql prompt:
drop database mydatabase;
#### the above dropping of the database should be done on both nodes.
#### restart MySQL on both hosts.
>service mysql restart
>mysql -u root -p

#### MySQL prompt at node1:
show master status;
#### Read the mysql-bin.xxxx name and log position number.

#### MySQL prompt on node2: enter the mysql-bin.xxxx file name and log position from node1.
>mysql -u root -p
stop slave;
change master to master_host='192.168.1.29', master_user='replicator', master_password='changeme', master_log_file='mysql-bin.xxxx', master_log_pos=154;
start slave;
show master status;
#### Read the mysql-bin.xxxx name and log position for node2.

#### Back at the MySQL prompt on node1, enter the log file name and log position from node2.
stop slave;
change master to master_host='192.168.1.30', master_user='replicator', master_password='changeme', master_log_file='mysql-bin.xxxx', master_log_pos=154;
start slave;

#### Import the MySQL database(s) on one node.
>mysql mydatabase<mydatabase.sql

#### Once the above is done for the MySQL replication it should reflect the same on the other node and you are back in sync.
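To confirm the two masters are healthy, the slave side of each node can be checked at the MySQL prompt; both replication threads should report Yes:

```sql
-- At the MySQL prompt on each node:
SHOW SLAVE STATUS\G
-- Slave_IO_Running: Yes and Slave_SQL_Running: Yes indicate a
-- working link; Seconds_Behind_Master should be 0 or close to it.
```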

#### CRON tab creation at root prompt.
>crontab -e
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

@reboot /etc/rc.local 2>&1
#### enter the above in CRON tab.

Reboot either host node and it should sync to the other node and recover, starting the services so the backup or new operational node comes back into service. I use a public IP that floats between the nodes with corosync and pacemaker.
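The floating public IP itself is not configured in this article; as a sketch only, with corosync and pacemaker running, a single virtual IP resource could be declared from the crm shell like this (the address, netmask and interface are placeholders, not the values used by my nodes):

```
# At the crm shell (crm configure); all values below are placeholders.
primitive FloatingIP ocf:heartbeat:IPaddr2 \
    params ip=192.0.2.10 cidr_netmask=24 nic=eth0 \
    op monitor interval=10s
commit
```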


My email address is mark @ the domain of this forum. The terms of use of the site are in the announcements.
