I created several GlusterFS volumes replicated over 9 machines. All of the mounts (128 to be exact) are managed by systemd.
The systemctl command shows a random subset of the mounts as "failed" (see listing below). From the log messages I guess that's because the network or the gluster daemon wasn't ready at that time.
mnt-gluster-gluster\x2d119.mount loaded active mounted /mnt/gluster/gluster-119
mnt-gluster-gluster\x2d12.mount loaded active mounted /mnt/gluster/gluster-12
● mnt-gluster-gluster\x2d120.mount loaded failed failed /mnt/gluster/gluster-120
mnt-gluster-gluster\x2d122.mount loaded active mounted /mnt/gluster/gluster-122
mnt-gluster-gluster\x2d123.mount loaded active mounted /mnt/gluster/gluster-123
● mnt-gluster-gluster\x2d124.mount loaded failed failed /mnt/gluster/gluster-124
mnt-gluster-gluster\x2d125.mount loaded active mounted /mnt/gluster/gluster-125
mnt-gluster-gluster\x2d126.mount loaded active mounted /mnt/gluster/gluster-126
I think it would be a sufficient solution to just retry all the failed mounts. How can I accomplish that?
Best Answer
You can try running mount -a just after your system has booted.
If that workaround works, you can set up a script whose content is just something like "sleep 60 && mount -a" and have it executed at boot time (via cron, systemd, or any other means).
It's a really dirty fix, though; the proper solution would be to investigate why some filesystems do not mount correctly in the first place.
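As a slightly cleaner variant of the same workaround, you could let systemd itself do the retry instead of cron. The sketch below is a hypothetical one-shot unit (the file name and the glob pattern are assumptions based on the unit names in the question's listing, not something from your system): it waits for the network and the gluster daemon, clears the "failed" state of the mount units, and starts them again. `systemctl start` is a no-op for mounts that are already active, so only the failed ones are actually retried.

```ini
# /etc/systemd/system/gluster-mount-retry.service
# Hypothetical unit; a sketch, not a tested configuration.
[Unit]
Description=Retry GlusterFS mounts that failed during boot
# Order after the network and the local gluster daemon
After=network-online.target glusterd.service
Wants=network-online.target

[Service]
Type=oneshot
# Clear the "failed" state, then start the mount units again.
# The glob is assumed to match the unit names from the listing above
# (mnt-gluster-gluster\x2dNNN.mount).
ExecStart=/bin/sh -c 'systemctl reset-failed "mnt-gluster-*.mount"; systemctl start "mnt-gluster-*.mount"'

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable gluster-mount-retry.service`. For the underlying problem, it may be enough to mark the gluster entries in /etc/fstab with the `_netdev` option (or `x-systemd.requires=glusterd.service`), so that systemd orders the mounts after the network and the daemon in the first place and no retry is needed.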