On Thu, Feb 3, 2022 at 3:42 PM Richard W.M. Jones <rjones(a)redhat.com> wrote:
> On Thu, Feb 03, 2022 at 03:07:20PM +0200, Nir Soffer wrote:
> > On Thu, Feb 3, 2022 at 2:30 PM Richard W.M. Jones <rjones(a)redhat.com> wrote:
> > >
> > > I'm following the instructions here:
> > >
> > > https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/...
> > >
> > > I've also added an extra virtual disk to my host node which appears as
> > > /dev/sdb. Although the disk is partitioned, /dev/sdb1 is not created.
> > > Is udev broken in oVirt node?
> > >
> > > I cannot see anywhere in the dialog where you specify the name of the
> > > device (eg. "/dev/sdb1"). So how's it supposed to work?
> > >
> > > It doesn't work, giving an information-free error message:
> > >
> > >   Error while executing action Add Storage Connection: Problem while
> > >   trying to mount target
> >
> > You can find more info on the failure in:
> > /var/log/vdsm/supervdsmd.log
> vdsm.storage.mount.MountError: Command ['/usr/bin/mount', '-t',
> 'xfs', '/srv', '/rhev/data-center/mnt/_srv'] failed with rc=32
> out=b'' err=b'mount: /rhev/data-center/mnt/_srv: /srv is not a block
> device.\n'
>
> I suppose it expects the name of the block device (ie. /dev/sdb)
> rather than the mount point there.
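Right, judging by that error, the "Path" field is used directly as the
mount source, and vdsm derives the mount target from it (slashes become
underscores, as in /srv -> _srv above). So with the partition in the path
field, it would presumably run something like:

  /usr/bin/mount -t xfs /dev/sdb1 /rhev/data-center/mnt/_dev_sdb1

(/dev/sdb1 here is just the partition from your setup; the exact target
name is my guess based on the log above.)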
> It also turns out the new device has been "captured" by multipathd:
>
> # multipath -ll
> 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1 dm-0 QEMU,QEMU HARDDISK
> size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
> `-+- policy='service-time 0' prio=1 status=active
>   `- 0:0:0:1 sdb 8:16 active ready running
>
> I've so far not found a way to disable multipathd effectively. Even
> stopping and disabling the service and rebooting doesn't help, so I
> guess something starts it up again.
This is expected on oVirt: our multipath configuration intentionally grabs
any device that multipath can work with, even if the device has only one path.
The motivation is to be able to configure a system when only one path is
available (maybe you have an HBA/network/server issue); once the other
paths become available, the system will use them transparently.
To avoid this issue with local devices, you need to blacklist the device.
Add this file:
$ cat /etc/multipath/conf.d/local.conf
blacklist {
    wwid "0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1"
}

(The wwid is the one shown by "multipath -ll" above, not the vendor/product
string.)
And run (as root):
multipathd reconfigure
At this point lsblk will show the expected /dev/sdb1 and multipath
will never use this device again.
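You can check that the blacklist took effect with something like:

  multipath -ll        # the 0QEMU_QEMU_HARDDISK_... map should be gone
  lsblk                # sdb1 visible, no multipath device on top of sdb

If the old map is still around, "multipath -f
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1" should flush it.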
Adding a serial to the device in the libvirt XML also makes it easier to
blacklist, for example:
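Something like this in the domain XML (the disk file and serial value here
are made up):

  <disk type='file' device='disk'>
    <driver name='qemu' type='raw'/>
    <source file='/var/lib/libvirt/images/local-disk.img'/>
    <target dev='sdb' bus='scsi'/>
    <serial>local-sdb</serial>  <!-- exposed to the guest as the disk serial -->
  </disk>

With a serial set, the device should get a stable wwid like
0QEMU_QEMU_HARDDISK_local-sdb instead of one based on the drive id (as in
your output above), so the blacklist entry keeps working even if the drive
id changes.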
> > Posix compliant is basically NFS without some mount options:
> >
> > https://github.com/oVirt/vdsm/blob/878407297cb7dc892110ae5d6b0403ca972492...
> >
> > Using a local device on a host is a less tested path; I'm not sure QE is
> > testing this (Avihai, please correct me if you do).
> >
> > If you have multiple hosts, this will break if the local device does not have
> > the same name on all hosts (so using /dev/sdb1 is very fragile). If you have
> > one host it can be fine.
> >
> > Any reason to add a device to the VM, instead of using an NFS server?
> >
> > I guess that your purpose is testing virt-v2v with oVirt, so you want to test
> > a common configuration; NFS is very common for oVirt users.
> I don't have an NFS server to use for this.
Configuring NFS on some other machine is easy (a minimal sketch is below).
I'm using another VM for this, so I can easily test negative flows like
stopping or restarting the NFS server while it is being used by VMs or by
storage operations.

I'm using a 2G Alpine VM for this; it works fine even with 1G of memory.
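Roughly, on the NFS VM (the export path is just an example; 36:36 is
vdsm:kvm, which oVirt requires as the owner of the export):

  mkdir -p /export/data
  chown 36:36 /export/data    # vdsm:kvm
  echo '/export/data *(rw,sync,no_subtree_check)' >> /etc/exports
  exportfs -r                 # re-export everything in /etc/exports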
Nir