HA iSCSI Target with DRBD (2 node cluster how-to)

hunter86_bg
Posts: 2019
Joined: 2015/02/17 15:14:33
Location: Bulgaria

Re: HA iSCSI Target with DRBD (2 node cluster how-to)

Post by hunter86_bg » 2019/07/29 03:52:47

Yeah, it seems that you will have to wait for the bug fix in 7.7. It's really strange that it took them a year to focus on the bug.
Check the other 2 issues (edit2/3), as you have some typos and missing order/colocation constraints.

ladam@ictuniverse.eu
Posts: 12
Joined: 2019/07/14 12:17:27

Re: HA iSCSI Target with DRBD (2 node cluster how-to)

Post by ladam@ictuniverse.eu » 2019/08/04 11:41:25

Hello,

Is your installation guide based on a Red Hat 7.x distribution?

Regards

hunter86_bg
Posts: 2019
Joined: 2015/02/17 15:14:33
Location: Bulgaria

Re: HA iSCSI Target with DRBD (2 node cluster how-to)

Post by hunter86_bg » 2019/08/04 13:35:53

RHEL is the upstream for CentOS, which means that CentOS is essentially RHEL without the "Red Hat" logos, trademarks and other proprietary bits.
Still, CentOS and RHEL are fully binary compatible, as they are built from the same source.

In this case, Red Hat will provide the new version, and once the source is available, the CentOS project will rebuild it.

ladam@ictuniverse.eu
Posts: 12
Joined: 2019/07/14 12:17:27

Re: HA iSCSI Target with DRBD (2 node cluster how-to)

Post by ladam@ictuniverse.eu » 2019/08/04 15:25:21

Hello,
I think I've got the solution.
I just added the portal in step 12:
rm -f /root/cluster
pcs cluster cib /root/cluster
pcs -f /root/cluster resource create iscsi-target ocf:heartbeat:iSCSITarget portals="192.168.200.244:3260" iqn="iqn.2019-08.ict:sn.1234567890" allowed_initiators="iqn.1998-01.com.vmware:esx12-6f0bfd40" --group iscsi
pcs -f /root/cluster resource create iscsi-lun0 ocf:heartbeat:iSCSILogicalUnit target_iqn=iqn.2019-08.ict:sn.1234567890 lun=0 path=/dev/drbd0 --group iscsi
pcs cluster cib-push /root/cluster

I'm still doing some tests, but...

Thanks

hunter86_bg
Posts: 2019
Joined: 2015/02/17 15:14:33
Location: Bulgaria

Re: HA iSCSI Target with DRBD (2 node cluster how-to)

Post by hunter86_bg » 2019/08/04 16:52:43

Check the 2 issues I mentioned several posts back.
Also, keep in mind that if you want to avoid split-brain, you can always add a third node - a 1 CPU, 1 GB RAM VM is enough.
Of course, you will need some extra steps.
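Roughly, the extra steps look like this. A minimal sketch, assuming pcs 0.9 on CentOS 7 and a hypothetical arbiter node named san-arb (adjust names to your setup); these commands obviously need a real cluster to run against:

```shell
# On the new VM: install the cluster stack and set the hacluster password.
yum -y install pcs fence-agents-all
echo centos | passwd --stdin hacluster
systemctl enable --now pcsd

# From an existing cluster node: authenticate and add the new node.
pcs cluster auth san-arb
pcs cluster node add san-arb

# Keep all resources off the arbiter, so it only contributes a quorum vote.
pcs constraint location iscsi avoids san-arb
pcs constraint location MASTER-DRBD0 avoids san-arb

# With 3 votes you no longer need the dangerous no-quorum-policy=ignore.
pcs property set no-quorum-policy=stop
```

The arbiter does not need DRBD or any storage - it just votes.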

ladam@ictuniverse.eu
Posts: 12
Joined: 2019/07/14 12:17:27

Re: HA iSCSI Target with DRBD (2 node cluster how-to)

Post by ladam@ictuniverse.eu » 2019/08/10 07:55:16

Hello all,

This config works fine on CentOS 7:

yum -y install http://www.elrepo.org/elrepo-release-7. ... noarch.rpm
yum -y install fence-agents-all pcs targetcli "*drbd90*" vim-enhanced bash-completion net-tools bind-utils mlocate setroubleshoot-server policycoreutils-{python,devel} iscsi-initiator-utils


vgcreate drbd /dev/sdb
lvcreate -l 100%FREE -n drbd0 drbd
vi /etc/lvm/lvm.conf

# Configuration option devices/global_filter.
# Use global_filter to hide devices from the LVM system components;
# devices excluded by global_filter are not opened by LVM.

global_filter = [ "r|/dev/drbd/drbd0|", "r|/dev/drbd0|" ]
vi /etc/drbd.d/drbd0.res

resource drbd0 {
    net {
        cram-hmac-alg sha1;
        shared-secret "FooFunFactory";
    }
    volume 0 {
        device /dev/drbd0;
        disk /dev/drbd/drbd0;
        meta-disk internal;
    }
    on san1 {
        node-id 0;
        address 192.168.200.20:7000;
    }
    on san2 {
        node-id 1;
        address 192.168.200.21:7000;
    }
    connection {
        host san1 port 7000;
        host san2 port 7000;
        net {
            protocol C;
        }
    }
}


drbdadm create-md drbd0
drbdadm up drbd0

drbdadm --force primary drbd0


systemctl enable --now iscsid.service
systemctl enable --now pcsd
echo centos | passwd --stdin hacluster
pcs cluster auth san1 san2

pcs cluster setup --start --enable --name CentOS-DRBD-iSCSI san1 san2 --transport udpu --wait_for_all=1 --encryption 1

pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore

pcs cluster cib /root/cluster
pcs -f /root/cluster resource create DRBD0 ocf:linbit:drbd drbd_resource=drbd0
pcs -f /root/cluster resource master MASTER-DRBD0 DRBD0 meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
pcs cluster cib-push /root/cluster

pcs resource create iscsi-ip ocf:heartbeat:IPaddr2 ip=192.168.200.244 cidr_netmask=24 --group iscsi

rm -f /root/cluster
pcs cluster cib /root/cluster
pcs -f /root/cluster constraint order promote MASTER-DRBD0 then start iscsi kind=Mandatory id=iscsi-always-after-master-drbd
pcs -f /root/cluster constraint colocation add iscsi with master MASTER-DRBD0 INFINITY id=iscsi-group-where-master-drbd
pcs cluster cib-push /root/cluster

systemctl enable --now target.service

rm -f /root/cluster
pcs cluster cib /root/cluster
pcs -f /root/cluster resource create iscsi-target ocf:heartbeat:iSCSITarget portals="192.168.200.244:3260" iqn="iqn.2019-08.ict:sn.1234567890" allowed_initiators="iqn.1998-01.com.vmware:esx12-6f0bfd40" --group iscsi
pcs -f /root/cluster resource create iscsi-lun0 ocf:heartbeat:iSCSILogicalUnit target_iqn=iqn.2019-08.ict:sn.1234567890 lun=0 path=/dev/drbd0 --group iscsi
pcs cluster cib-push /root/cluster
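To verify the whole stack from the client side and exercise a failover, something like this should work. A sketch, reusing the VIP and IQNs from the config above; run the iscsiadm commands on the initiator (e.g. the ESXi replacement box or any Linux client with iscsi-initiator-utils):

```shell
# From the client: check the VIP answers and the target/LUN are visible.
iscsiadm -m discovery -t sendtargets -p 192.168.200.244:3260
iscsiadm -m node -T iqn.2019-08.ict:sn.1234567890 -p 192.168.200.244:3260 --login

# On a cluster node: force the group to the other node and watch it recover.
pcs resource move iscsi san2
pcs status

# Afterwards, remove the location constraint that "move" created.
pcs resource clear iscsi
```

The client should only see a short I/O stall while the group and the DRBD master move.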

hunter86_bg
Posts: 2019
Joined: 2015/02/17 15:14:33
Location: Bulgaria

Re: HA iSCSI Target with DRBD (2 node cluster how-to)

Post by hunter86_bg » 2019/08/10 08:15:47

Don't forget the most important task - testing the cluster.
Also check whether failover occurs properly when the client is using the iSCSI LUN as an LVM PV.
lvm.conf should be the same on both nodes.

ladam@ictuniverse.eu
Posts: 12
Joined: 2019/07/14 12:17:27

Re: HA iSCSI Target with DRBD (2 node cluster how-to)

Post by ladam@ictuniverse.eu » 2019/08/10 13:33:30

Thanks,

Actually, we are in the testing phase.
Question: the sync seems to be really slow. How can we check that? (DRBD 9)
I have a 10G direct link between the 2 servers.
My config is now:
resource drbd0 {
    disk {
        on-io-error detach;
        no-disk-flushes;
        no-disk-barrier;
        c-plan-ahead 0;
        c-fill-target 24M;
        c-min-rate 80M;
        c-max-rate 1000M;
    }
    net {
        # max-epoch-size 20000;
        max-buffers 36k;
        sndbuf-size 1024k;
        rcvbuf-size 2048k;
        cram-hmac-alg sha1;
        shared-secret "FooFunFactory";
    }
    volume 0 {
        device /dev/drbd0;
        disk /dev/drbd/drbd0;
        meta-disk internal;
    }
    on san3 {
        node-id 0;
        address 10.0.0.30:7000;
    }
    on san4 {
        node-id 1;
        address 10.0.0.31:7000;
    }
    connection {
        host san3 port 7000;
        host san4 port 7000;
        net {
            protocol C;
        }
    }
}
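As a sanity check on the numbers: a full resync cannot finish faster than data size divided by sustained rate. A quick back-of-envelope calculation (the 8 TiB and 100 MiB/s figures below are illustrative assumptions, not measurements from this cluster):

```shell
# Lower bound on a full resync: data size / sustained resync rate.
size_mib=$((8 * 1024 * 1024))    # assumed 8 TiB of data, expressed in MiB
rate_mib_s=100                   # assumed sustained resync rate in MiB/s
seconds=$((size_mib / rate_mib_s))
hours=$((seconds / 3600))
echo "estimated full-sync time: ${hours} hours"   # prints: estimated full-sync time: 23 hours
```

If the observed time is far above such a bound, the resync is being throttled. As far as I understand, with c-plan-ahead 0; DRBD disables the dynamic resync controller and falls back to the fixed resync-rate setting, whose default is quite low - so explicitly setting resync-rate in the disk section may be worth testing.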

hunter86_bg
Posts: 2019
Joined: 2015/02/17 15:14:33
Location: Bulgaria

Re: HA iSCSI Target with DRBD (2 node cluster how-to)

Post by hunter86_bg » 2019/08/10 15:18:30

'drbdadm status' should show the sync status. For actual throughput numbers, 'drbdsetup status --verbose --statistics' should give more detail.

ladam@ictuniverse.eu
Posts: 12
Joined: 2019/07/14 12:17:27

Re: HA iSCSI Target with DRBD (2 node cluster how-to)

Post by ladam@ictuniverse.eu » 2019/08/10 15:28:02

Yes, but not the speed.
It took 7 days to sync 1.8 TB over a 1 Gb Ethernet link.
Now I have set up a 10 Gb link, but it still seems slow, and it will take a month for 8 TB.

regards
[root@san3 log]# drbdadm status
drbd0 role:Primary
disk:UpToDate
san4 role:Secondary
replication:SyncSource peer-disk:Inconsistent done:0.02
