I have fixed my issue with the following steps to create an LVM resource. sdb is my shared disk, presented over iSCSI to both hosts.
[root@rhel-1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 50G 0 disk
sr0 11:0 1 3.8G 0 rom /mnt
Then I created a new partition on sdb.
[root@rhel-1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xf8a80986.
Command (m for help): p
Disk /dev/sdb: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 33550336 bytes
Disk label type: dos
Disk identifier: 0xf8a80986
Device Boot Start End Blocks Id System
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1):
First sector (65528-104857599, default 65528):
Using default value 65528
Last sector, +sectors or +size{K,M,G} (65528-104857599, default 104857599):
Using default value 104857599
Partition 1 of type Linux and of size 50 GiB is set
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@rhel-1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 50G 0 disk
└─sdb1 8:17 0 50G 0 part
sr0 11:0 1 3.8G 0 rom /mnt
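For reference, the same partition can be created non-interactively, which is easier to repeat on other nodes (a sketch, not part of my original steps; it assumes /dev/sdb is the shared disk, so adjust the device name to your environment):

```shell
# Non-interactive alternative to the fdisk session above.
parted -s /dev/sdb mklabel msdos               # create a DOS disklabel
parted -s /dev/sdb mkpart primary 1MiB 100%    # one primary partition, whole disk
partprobe /dev/sdb                             # ask the kernel to re-read the table
```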
Then I created the physical volume, volume group, and logical volume.
[root@rhel-1 ~]# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
[root@rhel-1 ~]# vgcreate cluster_vg /dev/sdb1
Volume group "cluster_vg" successfully created
[root@rhel-1 ~]# lvcreate -L 40G -n cluster_lv cluster_vg
Logical volume "cluster_lv" created.
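Before moving on, the stack can be confirmed with the standard LVM reporting commands (a quick check I'd suggest; the names match the ones created above):

```shell
pvs /dev/sdb1     # physical volume and the VG it belongs to
vgs cluster_vg    # VG size and remaining free space
lvs cluster_vg    # cluster_lv should report 40.00g
```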
Then I created an ext4 file system on the logical volume cluster_lv.
[root@rhel-1 ~]# mkfs.ext4 /dev/mapper/cluster_vg-cluster_lv
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=8191 blocks
2621440 inodes, 10485760 blocks
524288 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
After that I need to configure exclusive activation of the volume group in the cluster. Before doing so, I need to ensure that locking_type is set to 1 and use_lvmetad is set to 0 in the /etc/lvm/lvm.conf file. I used the following command to make these changes to lvm.conf; it must be run on both nodes.
[root@rhel-1 ~]# lvmconf --enable-halvm --services --startstopservices
Warning: Stopping lvm2-lvmetad.service, but it can still be activated by lvm2-lvmetad.socket
Removed symlink /etc/systemd/system/sysinit.target.wants/lvm2-lvmetad.socket.
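To verify that lvmconf actually applied the HA-LVM settings on each node, the two options can be checked directly (a quick check, not in my original steps):

```shell
# Both nodes must show: locking_type = 1 and use_lvmetad = 0.
grep -E '^[[:space:]]*locking_type[[:space:]]*=' /etc/lvm/lvm.conf
grep -E '^[[:space:]]*use_lvmetad[[:space:]]*=' /etc/lvm/lvm.conf
```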
After that I need to add the volume groups other than my cluster VG as entries to volume_list in /etc/lvm/lvm.conf. I made this change on both nodes.
[root@rhel-1 ~]# grep "volume_list = " /etc/lvm/lvm.conf
# volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
volume_list = [ "rhel" ]
# auto_activation_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
# read_only_volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
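The names to put in volume_list can be taken from vgs: every VG on the node except the cluster VG should be listed, so only local VGs (here, "rhel") are activated outside the cluster's control (a sketch):

```shell
# List all VG names and drop the cluster-managed one; the remaining
# names are the candidates for volume_list.
vgs --noheadings -o vg_name | tr -d ' ' | grep -vw cluster_vg
```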
Rebuild the initramfs boot image to guarantee that the boot image will not try to activate a volume group controlled by the cluster. A reboot is required after rebuilding the initramfs.
[root@rhel-1 ~]# dracut -f -v
[root@rhel-1 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
[root@rhel-1 ~]# init 6
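After the reboot, the cluster VG should no longer be activated automatically; the LV attribute string shows this (a check I'd suggest, assuming the names above):

```shell
# The 5th character of lv_attr is 'a' when the LV is active; since
# cluster_vg is excluded from volume_list, expect '-' there until the
# cluster activates it.
lvs -o lv_name,lv_attr cluster_vg
```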
Create the LVM resource
[root@rhel-1 ~]# pcs resource create db2inst1_lvm LVM volgrpname=cluster_vg exclusive=true
Assumed agent name 'ocf:heartbeat:LVM' (deduced from 'LVM')
Check the cluster status:
[root@rhel-1 ~]# pcs status
Cluster name: juriscluster
Stack: corosync
Current DC: rhel-1 (version 1.1.16-12.el7_4.8-94ff4df) - partition with quorum
Last updated: Thu Mar 15 14:27:16 2018
Last change: Thu Mar 15 14:14:33 2018 by root via cibadmin on rhel-1
2 nodes configured
2 resources configured
Online: [ rhel-1 rhel-2 ]
Full list of resources:
db2inst1_scsi (stonith:fence_scsi): Started rhel-1
db2inst1_lvm (ocf::heartbeat:LVM): Started rhel-2
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
Best Answer
You said that you have SAN storage: create a partition for fencing and use it as a SCSI STONITH device, which will solve your problem, like this example:
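A fence_scsi STONITH resource for a shared SAN/iSCSI LUN can look like this (a sketch; the resource name, node names, and device path are taken from the question above and must match your own setup):

```shell
# SCSI-3 persistent-reservation fencing on the shared disk.
# 'meta provides=unfencing' lets a fenced node re-register its key
# when it rejoins the cluster.
pcs stonith create db2inst1_scsi fence_scsi \
    pcmk_host_list="rhel-1 rhel-2" \
    devices="/dev/sdb" \
    meta provides="unfencing"
```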
And don't forget to enable STONITH with:
pcs property set stonith-enabled=true