DRBD (Distributed Replicated Block Device) is a software package for Linux-based systems. Its aim is to mirror storage devices over a network from one node to another, providing disaster recovery and failover. DRBD offers high availability for storage hardware and can serve in place of networked shared storage.
How DRBD Works
Imagine we need to cluster a storage partition on two Linux (CentOS) systems. We need a block device (like /dev/sdb1) on both systems, which are defined as the primary and secondary nodes (the primary and secondary roles can be switched). DRBD uses a virtual block device (like drbd0) to replicate the block devices between the two systems.
The virtual device drbd0 is mounted on the primary node for read/write access. Installing the DRBD package lets us create this virtual disk. We can format /dev/drbd0 with an xfs or ext3 filesystem. drbd0 sits on top of the /dev/sdb1 block device on both systems; from then on, we work only on the drbd0 device.
At any given moment the contents can only be accessed from the primary node, because drbd0 can only be mounted there. If the primary system crashes, its local files may be lost, but the replicated data on the virtual device drbd0 remains available. We can then promote the secondary node to primary and access the contents again.
Using DRBD on CentOS
This tutorial was performed on CentOS 7, but it should also work on other CentOS versions.
Requirements:
- Two systems with CentOS installed
- A free block-device like /dev/sdb1 on both systems
- SELinux Permissive or disabled
- Port 7788 allowed on the firewall
- The nodes must be within the same network.
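The SELinux and firewall requirements can be satisfied on CentOS 7 as follows (a sketch assuming firewalld is the active firewall; run on both nodes):

```shell
# Switch SELinux to permissive mode for the current session
setenforce 0
# Make the change persistent across reboots
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Open TCP port 7788 for DRBD replication traffic
firewall-cmd --permanent --add-port=7788/tcp
firewall-cmd --reload
```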
Installation
Start by adding the ELRepo repository on both nodes, since DRBD packages are not available in the default CentOS repositories.
$ rpm -ivh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
and import the GPG key on both nodes as well. The GPG key is the public key used to verify the signatures of packages from the repository.
$ rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-elrepo.org
Now use yum to install the DRBD packages. First, identify the DRBD versions supported by our kernel:
$ yum info *drbd* | grep Name
After that, install the required version of DRBD along with the necessary kernel module:
$ yum -y install drbd84-utils kmod-drbd84
Then check whether the kernel module is loaded or not
$ lsmod | grep -i drbd
If the output of the above command is empty, the kernel module is not loaded. You can reboot the system, or load the module manually:
$ modprobe drbd
modprobe
is a command that intelligently adds/removes modules from the Linux kernel. To ensure the module is loaded on every boot, the systemd-modules-load service is used. So, create a file called drbd.conf
inside /etc/modules-load.d.
$ echo drbd > /etc/modules-load.d/drbd.conf
Configuring DRBD
Configuration files of DRBD are located in /etc/drbd.d/.
By default, /etc/drbd.d/global_common.conf
is available, which contains the global or main configuration; the other configuration files are called resource files and have the *.res extension. Now, on both nodes, we need to create a resource configuration file to use DRBD for our specified block devices.
First, create a resource file named linuxhandbook.res
$ vi /etc/drbd.d/linuxhandbook.res
then copy and paste the content below into the resource file:
resource linuxhandbook {
    protocol C;
    on node1 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 10.20.222.14:7788;
        meta-disk internal;
    }
    on node2 {
        device /dev/drbd0;
        disk /dev/sdb1;
        address 10.20.222.15:7788;
        meta-disk internal;
    }
}
Here,
- linuxhandbook is the resource name. Resource names must be unique.
- protocol C selects the replication protocol. DRBD supports three:
- protocol A: asynchronous replication protocol.
- protocol B: semi-synchronous replication protocol.
- protocol C: synchronous replication protocol, mainly used for nodes on short-distance networks.
- node1 and node2 are the hostnames of the two nodes. DRBD uses them to identify which section applies to which machine.
- device /dev/drbd0 is the logical device that DRBD creates on top of the backing disk.
- disk /dev/sdb1 is the physical block device that backs drbd0.
- address 10.20.222.14:7788 and address 10.20.222.15:7788 are the IP addresses of the two respective nodes, with TCP port 7788 open.
- meta-disk internal tells DRBD to store its metadata on the backing disk itself.
The same configuration must be used on both nodes.
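Since the resource file must be identical on both nodes, it is easiest to write it once and copy it over, for example (assuming node2 is reachable by that hostname over SSH):

```shell
# Copy the resource file from node1 to node2
scp /etc/drbd.d/linuxhandbook.res root@node2:/etc/drbd.d/

# DRBD matches each "on <name>" section against the node's hostname,
# so confirm the hostname on both machines:
uname -n
```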
Then initialize the metadata storage on each node:
$ drbdadm create-md linuxhandbook
If this gives an error message, you may need to manually wipe the beginning of the device and try the above command again:
$ dd if=/dev/zero of=/dev/sdb1 bs=1024k count=1024
Here the dd command overwrites the start of the device with zeros, clearing any existing filesystem signatures that can block metadata creation.
Once the metadata is created, check whether the drbd0 device is attached to the sdb1 disk on both nodes using lsblk:
$ lsblk
If drbd0 does not appear under sdb1 in the output, attach the drbd0 device through the resource file:
$ drbdadm attach linuxhandbook
or
$ drbdadm up linuxhandbook
Once again try,
$ lsblk
Start and enable the DRBD service on each of the nodes.
$ systemctl start drbd
$ systemctl enable drbd
Interestingly, the DRBD start may be quick on one node and take some time on the other, as each node waits for its peer to connect.
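To confirm that the nodes found each other, you can query the service and the resource's connection state (standard drbdadm subcommand in DRBD 8.4):

```shell
# Service status on the local node
systemctl status drbd

# Connection state of the resource; it should report "Connected"
# once both peers are up
drbdadm cstate linuxhandbook
```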
Setting up Primary and Secondary Nodes
DRBD uses only one node at a time as the primary node, where reads and writes can be performed.
First, we specify node1 as the primary node:
$ drbdadm primary linuxhandbook --force
Then check the status of the DRBD process:
$ cat /proc/drbd
or
$ drbd-overview
Here we can see which node is currently primary and which is secondary, the progress of the data synchronization, and the DRBD device state, such as Inconsistent, UpToDate, or Diskless.
The other node, node2, is automatically set as the secondary node; check the DRBD overview/process status to confirm.
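For reference, a healthy resource line in /proc/drbd for DRBD 8.4 looks roughly like the following (illustrative values, not taken from this setup):

```shell
# 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
#    cs = connection state, ro = roles (local/peer), ds = disk states
cat /proc/drbd
```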
Hereafter, on the primary node, we format drbd0 with an ext3 (or xfs) filesystem:
$ mkfs -t ext3 /dev/drbd0
Now, again at the primary node, we must mount the drbd0 device to be able to work on it.
$ mount /dev/drbd0 /mnt
You can select any mount point you prefer.
Testing The DRBD Process
With DRBD set up on both nodes, one node made primary, and the device mounted at /mnt, we can now create a file to test the synchronization between the DRBD nodes.
$ touch /mnt/drbdtest.txt
$ ll /mnt/
After this, we can set node1 as secondary and node2 as primary; the process mirrors the steps above.
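Sketched out, the manual failover looks like this (run each part on the indicated node; /mnt is the mount point used above):

```shell
# On node1 (current primary): stop using the device and demote the node
umount /mnt
drbdadm secondary linuxhandbook

# On node2: promote it to primary and mount the replicated device
drbdadm primary linuxhandbook
mount /dev/drbd0 /mnt

# The test file created earlier should now be visible on node2
ls -l /mnt/drbdtest.txt
```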
You can also manage and visualize DRBD cluster nodes graphically using LCMC (Linux Cluster Management Console).