How to get DRBD to automatically start after reboot, mount volume, start service, and configure primary/secondary

debian-stretch, drbd, ext3, high-availability, mount

I have a working DRBD setup across two Debian Stretch servers, which I created by following this awesome guide:
https://www.howtoforge.com/setting-up-network-raid1-with-drbd-on-debian-squeeze-p2/

But after each reboot I have to redo a number of things to get it into a working state again.

Here is what I see when it's working, before reboot:

root@server1:~# cat /proc/drbd
version: 8.4.7 (api:1/proto:86-101)
srcversion: AC50E9301653907249B740E
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:8 nr:0 dw:4 dr:1209 al:1 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

On server 2:

root@server2:~# cat /proc/drbd
version: 8.4.7 (api:1/proto:86-101)
srcversion: AC50E9301653907249B740E
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
ns:0 nr:8 dw:8 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

On server 1:

root@server1:~# mount
...
/dev/drbd0 on /var/www type ext3 (rw,relatime,data=ordered)

And here is what I see after a reboot: everything else comes back up, but the mount, the service start, and the primary/secondary configuration are lost. I've tried to add DRBD to the boot sequence by running:

update-rc.d drbd defaults

on both servers, but this doesn't seem to work. DRBD simply doesn't start at boot, although manually running /etc/init.d/drbd start works fine on both servers.
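One thing worth checking (a suggestion, not something from the original guide): Debian Stretch boots with systemd and only wraps the init script into a generated drbd.service unit, so it can help to enable it through systemctl and, after the next reboot, read the unit's boot log to see whether it failed or was simply never pulled in:

systemctl enable drbd              # on Stretch this hands off to the SysV machinery behind the scenes
journalctl -b -u drbd.service      # after a reboot, shows whether (and why) the unit failed to start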

Also, I'm unsure whether I can just add the DRBD volume to fstab, because surely it won't work if the DRBD service isn't even started? I've read about using _netdev in fstab, but various combinations of fstab entries didn't work out.
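A minimal sketch of an fstab entry that at least shouldn't block the boot (same device and mount point as above): note that _netdev only delays the mount until the network is up; it does not promote the node, so the filesystem still cannot be mounted while the node is Secondary.

/dev/drbd0  /var/www  ext3  noauto,nofail  0  0
# noauto: don't try to mount during boot; nofail: don't fail the boot if the device is missing
# the mount still has to happen after "drbdadm primary r0" on whichever node serves /var/www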


Finally, I also have to set the primary and secondary roles of DRBD after every restart, and then remount the volume manually.

So this is how I am getting it working after reboot:

On server 1:

root@server1:/etc# /etc/init.d/drbd status
● drbd.service - LSB: Control DRBD resources.
Loaded: loaded (/etc/init.d/drbd; generated; vendor preset: enabled)
Active: inactive (dead)
Docs: man:systemd-sysv-generator(8)

root@server1:/etc# /etc/init.d/drbd start
[ ok ] Starting drbd (via systemctl): drbd.service.
root@server1:/etc# cat /proc/drbd
version: 8.4.7 (api:1/proto:86-101)
srcversion: AC50E9301653907249B740E
0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

root@server1:/etc# drbdadm primary r0

root@server1:/etc# cat /proc/drbd
version: 8.4.7 (api:1/proto:86-101)
srcversion: AC50E9301653907249B740E
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

On server 2:

root@server2:~# /etc/init.d/drbd status
● drbd.service - LSB: Control DRBD resources.
Loaded: loaded (/etc/init.d/drbd; generated; vendor preset: enabled)
Active: inactive (dead)
Docs: man:systemd-sysv-generator(8)

root@server2:~# /etc/init.d/drbd start
[ ok ] Starting drbd (via systemctl): drbd.service.

root@server2:~# cat /proc/drbd
version: 8.4.7 (api:1/proto:86-101)
srcversion: AC50E9301653907249B740E
0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

root@server2:~# drbdadm secondary r0

root@server2:~# cat /proc/drbd
version: 8.4.7 (api:1/proto:86-101)
srcversion: AC50E9301653907249B740E
0: cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
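For reference, the manual steps on the primary side can be collected into one small stopgap script until boot-time startup is sorted out (a sketch built from the commands above; the resource name r0 and the /var/www mount point are the ones from this post):

#!/bin/sh
# Stopgap recovery for the node that should become Primary (server1 here).
set -e
/etc/init.d/drbd start        # or: systemctl start drbd
drbdadm primary r0            # promote this node
mount /dev/drbd0 /var/www     # remount the replicated volume
cat /proc/drbd                # should now show Primary/Secondary and UpToDate/UpToDate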

Some fstab entries that I tried:

/dev/drbd0 /var/www ext3 _netdev 0 2
UUID=042cc2e395b2b32 /var/www ext3 none,noauto 0 0

Not sure if you're supposed to use the UUID or just /dev/drbd0.
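On that question: /dev/drbd0 is the safer reference here, because with internal DRBD metadata the backing partition contains the same filesystem, and a UUID= entry could, depending on how the tools probe the devices, match the backing device rather than the DRBD device. If you do want the UUID, take it from the DRBD device itself:

blkid /dev/drbd0      # prints the filesystem UUID as seen through the DRBD device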

Here are my questions about why it's not starting:

  1. What fstab entries are supposed to be there?
  2. Why does update-rc.d drbd defaults not work?
  3. Why do I have to reset primary and secondary on both servers after each restart?

Best Answer

You should definitely consider using a product that is made for dealing with exactly this kind of problem.

I described in this post [ Nagios/Icinga: Don't show CRITICAL for DRBD partitions on standby node ] how I do exactly what you expect using opensvc, and it has worked fine for years.

  • no need for fstab entries, as mounts are described in the opensvc service configuration file (see the sketch after this list), which is automatically synchronized between opensvc cluster nodes

  • no need to set up update-rc.d drbd defaults, because the opensvc stack modprobes the drbd module when it sees that you have drbd resources in some services, and then brings DRBD up in the primary/secondary states

  • to reach primary/secondary at boot, just set the nodes parameter in the DEFAULT section of the opensvc service configuration file.

    If you want server1 as primary and server2 as secondary, just set nodes=server1 server2 using the command svcmgr -s mydrbdsvc set --kw DEFAULT.nodes="server1 server2"

  • if you only want the service to be started at boot on server1, set the orchestrate=start parameter using the command svcmgr -s mydrbdsvc set --kw DEFAULT.orchestrate=start

  • if instead you want the service orchestrated in high availability mode (meaning automatic failover between nodes), set the orchestrate=ha parameter using the command svcmgr -s mydrbdsvc set --kw DEFAULT.orchestrate=ha

  • to relocate the service from one node to the other, you can use the svcmgr -s mydrbdsvc switch command
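As a rough illustration only (the section and keyword names below are assumptions to be checked against the opensvc documentation for your agent version; they are not taken from the answer above), the resulting service configuration for the setup in the question could end up looking something like this:

[DEFAULT]
# first node listed is the preferred primary
nodes = server1 server2
# or "start" if failover should stay manual
orchestrate = ha

[disk#0]
# assumed keywords for a DRBD disk resource
type = drbd
res = r0

[fs#0]
# the ext3 filesystem from the question, mounted by the agent instead of fstab
type = ext3
dev = /dev/drbd0
mnt = /var/www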
