You're getting multiple DMAR errors. That's related to your IOMMU
setup, which would be affected if you're turning VT on and off in the
BIOS.
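(If you want to confirm which device is actually faulting, the DMAR
messages in the kernel log normally include the PCI address of the
offending device. Something along the lines of
"journalctl -k -b | grep -iE 'dmar|iommu'" should show them, though
the exact wording of the messages varies between kernel versions.)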
That's not really LVM so much as something trying to remap your
storage device's PCI link after the filesystem was mounted (whether it
was mounted by LVM, systemd, the mount command from a terminal, etc.),
which will cause the underlying block device to become unresponsive.
Even worse, it can leave the FS stuck unmounting and prevent a reboot
from succeeding after all of the consoles have been killed, requiring
someone to power cycle the machine manually if it cannot be fenced via
some power distribution unit. (Speaking from experience here...)
As for the issue itself, there are a couple of things you can try:
Try booting the machine in question with "intel_iommu=on iommu=pt" on
the kernel command line. That will put the IOMMU into passthrough
mode, which may help.
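If the node is EL-based, something like this should make the change
persistent (adjust for whatever bootloader you're actually using):

    grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt"

then reboot and check /proc/cmdline to confirm the flags took effect.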
Try moving the physical drives to a different port on the motherboard.
Some boards assign different IOMMU groups to different ports even if
they are of the same kind, regardless of whether it's AHCI / M.2 / etc.
If you have a real PCI RAID expansion card or something similar, you
could try checking the PCI link id it's using and moving it to another
link that does work. (Plug it into another PCI slot so it gets a
different IOMMU group assignment.)
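If you want to see how the groups are currently laid out (and whether
moving the card actually changed anything), the assignments are all
visible under /sys/kernel/iommu_groups. A minimal sketch in Python that
just walks that directory:

    #!/usr/bin/env python3
    # List PCI devices per IOMMU group by walking sysfs.
    # Assumes the kernel was booted with the IOMMU enabled;
    # otherwise /sys/kernel/iommu_groups is empty.
    import os

    BASE = "/sys/kernel/iommu_groups"
    for group in sorted(os.listdir(BASE), key=int):
        for dev in sorted(os.listdir(os.path.join(BASE, group, "devices"))):
            print("group %s: %s" % (group, dev))

Run it before and after moving the card and compare which group the
storage controller lands in.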
If you're willing to spend money, maybe try getting a PCI AHCI / RAID
expansion card if you don't have one. That would at least give you more
options if you cannot move the drives to a different port.
Long term, the best option would be to move those gluster bricks to
another host that isn't acting as a VM hypervisor. These kinds of bugs
can crop up with kernel updates, and as the kernel's IOMMU support is
still kinda iffy, production-wise it's better to avoid the issue
entirely.
-Patrick Hibbs
On Wed, 2022-02-02 at 12:51 +0000, Strahil Nikolov via Users wrote:
Most probably, when virtualization is enabled the vdsm services can
start, and they create an LVM filter that affects your Gluster bricks.
Boot the system (most probably with virtualization disabled), move
your entry from /etc/fstab to a dedicated '.mount' unit, and then boot
with virtualization enabled.
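For example, a dedicated mount unit could look roughly like this. The
device, mount point and filesystem type below are placeholders only,
and the unit file name has to match the mount path (so
/gluster_bricks/data becomes gluster_bricks-data.mount):

    # /etc/systemd/system/gluster_bricks-data.mount
    # Example only: adjust What=/Where=/Type= to your actual brick.
    [Unit]
    Description=Gluster brick mount
    Before=glusterd.service

    [Mount]
    What=/dev/gluster_vg/gluster_lv
    Where=/gluster_bricks/data
    Type=xfs
    Options=inode64,noatime

    [Install]
    WantedBy=multi-user.target

Then 'systemctl daemon-reload' and
'systemctl enable --now gluster_bricks-data.mount'.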
Once booted with the flag enabled, check the situation (for example,
blacklist the local disks in /etc/multipath/conf.d/blacklist.conf,
check and adjust the LVM filter, etc.).
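For reference, the multipath blacklist is just a section like the
following (the WWID is a placeholder; get the real one from
'multipath -ll'):

    # /etc/multipath/conf.d/blacklist.conf
    blacklist {
        wwid "REPLACE_WITH_LOCAL_DISK_WWID"
    }

followed by 'systemctl reload multipathd'. For the LVM filter itself,
on oVirt nodes it's usually easier to let vdsm regenerate it with
'vdsm-tool config-lvm-filter' than to hand-edit /etc/lvm/lvm.conf.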
Best Regards,
Strahil Nikolov
> On Wed, Feb 2, 2022 at 11:52, eevans(a)digitaldatatechs.com
> <eevans(a)digitaldatatechs.com> wrote:
> My setup is 3 oVirt nodes that run gluster independently of the
> engine server, even though the engine still controls it. So 4
> nodes, one engine and 3 clustered nodes.
> This has been up and running with no issues except this:
> But now my arbiter node will not load the gluster drive when
> virtualization is enabled in the BIOS. I've been scratching my head
> on this and need some direction.
> I am attaching the error.
>
>
> https://1drv.ms/u/s!AvgvEzKKSZHbhMRQmUHDvv_Xv7dkhw?e=QGdfYR
>
> Keep in mind, this error does not occur if VT is turned off... it
> boots normally.
>
> Thanks in advance.
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement:
https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2EK2SJK3VTQ...