I think I may have just messed up my cluster.
I'm running an older oVirt 4.4.2.6 cluster on CentOS 8 with 4 nodes and a
self-hosted engine. I wanted to assemble the spare drives on 3 of the 4
nodes into a new Gluster volume for extra VM storage.
Unfortunately, I did not look closely enough at one of the nodes before
running sfdisk+parted+pvcreate on it, and now it looks like I may have
broken my onn storage. pvs warns about missing PV UUIDs:
# pvs
  WARNING: Couldn't find device with uuid RgTaWg-fR1T-J3Nv-uh03-ZTi5-jz9X-cjl1lo.
  WARNING: Couldn't find device with uuid 0l9CFI-Z7pP-x1P8-AJ78-gRoz-ql0e-2gzXsC.
  WARNING: Couldn't find device with uuid fl73h2-ztyn-y9NY-4TF4-K2Pd-G2Ow-vH46yH.
  WARNING: VG onn_ovirt1 is missing PV RgTaWg-fR1T-J3Nv-uh03-ZTi5-jz9X-cjl1lo (last written to /dev/nvme0n1p3).
  WARNING: VG onn_ovirt1 is missing PV 0l9CFI-Z7pP-x1P8-AJ78-gRoz-ql0e-2gzXsC (last written to /dev/nvme1n1p1).
  WARNING: VG onn_ovirt1 is missing PV fl73h2-ztyn-y9NY-4TF4-K2Pd-G2Ow-vH46yH (last written to /dev/nvme2n1p1).
  PV             VG                   Fmt  Attr PSize    PFree
  /dev/md2       vg00                 lvm2 a--  <928.80g       0
  /dev/nvme2n1p1 gluster_vg_nvme2n1p1 lvm2 a--     2.91t       0
  /dev/nvme3n1p1 onn_ovirt1           lvm2 a--     2.91t       0
  [unknown]      onn_ovirt1           lvm2 a-m   929.92g 100.00g
  [unknown]      onn_ovirt1           lvm2 a-m  <931.51g       0
  [unknown]      onn_ovirt1           lvm2 a-m     2.91t       0
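In case it matters for recovery, these are the read-only checks I was
planning to run next on the affected node (this is my best guess at the
right approach; the archive location is the stock LVM default):

```shell
# List any automatic metadata backups/archives LVM kept for this VG
# (LVM snapshots the VG metadata into /etc/lvm/archive before changes).
vgcfgrestore --list onn_ovirt1

# Show PVs with their UUIDs so I can match them against the warnings above.
pvs -o pv_name,pv_uuid,vg_name

# See which logical volumes in the VG reference the missing PVs.
lvs -a -o lv_name,devices onn_ovirt1
```

None of these commands modify anything; they only report state.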
Here's what I don't understand:
* This onn volume group only existed on one of the 4 nodes. I expected it
would have been on all 4?
* lsblk and /etc/fstab don't show any reference to onn
* What is the ONN volume group used for, and how bad is it if it's now
missing? I note that my VMs all continue to run and I've been able to
migrate them off of this affected node with no apparent problems.
* Is it possible that this onn volume group was already broken before I
messed with the nvme3n1 disk? When ovirt was originally installed
several years ago, I went through the install process multiple times and
might not have cleaned up properly each time.
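For context, this is how I've been checking whether anything is actively
using the onn VG right now (again, all read-only queries; the grep
patterns are just my VG name):

```shell
# Are any logical volumes in onn_ovirt1 currently active?
lvs -o lv_name,lv_attr,lv_active onn_ovirt1

# Is anything from the VG mapped by device-mapper or mounted?
dmsetup ls | grep onn_ovirt1
mount | grep onn_ovirt1

# Cross-check: list all VGs with UUIDs, to spot leftovers from the
# earlier install attempts.
vgs -o vg_name,vg_uuid
```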
--Mike