Issue: Device path changed after adding disks to guest VM

Hi All, I'm facing the problem that after adding disks to a guest VM, the device target path changed (my oVirt version is 4.3). For example:

Before adding a disk:

virsh # domblklist <vmname>
Target   Source
---------------------------------------------------------
hdc      -
vda      /dev/mapper/3600a09803830386546244a546d494f53
vdb      /dev/mapper/3600a09803830386546244a546d494f54
*vdc     /dev/mapper/3600a09803830386546244a546d494f55*
vdd      /dev/mapper/3600a09803830386546244a546d494f56
vde      /dev/mapper/3600a09803830386546244a546d494f57
vdf      /dev/mapper/3600a09803830386546244a546d494f58

After adding a disk, and then shutting the VM down and starting it again:

virsh # domblklist <vmname>
Target   Source
---------------------------------------------------------
hdc      -
vda      /dev/mapper/3600a09803830386546244a546d494f53
vdb      /dev/mapper/3600a09803830386546244a546d494f54
*vdc     /dev/mapper/3600a09803830386546244a546d494f6c*
*vdd     /dev/mapper/3600a09803830386546244a546d494f55*
vde      /dev/mapper/3600a09803830386546244a546d494f57
vdf      /dev/mapper/3600a09803830386546244a546d494f57
vdg      /dev/mapper/3600a09803830386546244a546d494f58

The devices' multipath WWIDs don't map to the same target paths as before, so in my VM /dev/vdc doesn't point to the old /dev/mapper/3600a09803830386546244a546d494f55 anymore.

Does anybody know how I can make the device path mapping fixed, so it does not change after adding or removing disks? Many thanks in advance.

Joy

On Wed, Dec 2, 2020 at 10:27 AM Joy Li <joooy.li@gmail.com> wrote:
[...]
Device nodes are not stable, and oVirt cannot guarantee that you will get the same node in the guest in all runs. You should use /dev/disk/by-id/xxx links to locate devices, and blkid to create fstab mounts that do not depend on node names.

Regardless, oVirt tries to keep devices as stable as possible. Do you know how to reproduce this issue reliably?

Nir
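For example, a minimal sketch of that approach (the device node, UUID, and mount point below are placeholders, not values from this thread):

# find the filesystem UUID of a disk
blkid /dev/vdb
/dev/vdb: UUID="1b4e28ba-2fa1-11d2-883f-b9a761bde3fb" TYPE="xfs"

# then mount by UUID in /etc/fstab instead of by node name:
UUID=1b4e28ba-2fa1-11d2-883f-b9a761bde3fb  /data  xfs  defaults  0 0

# or reference the stable by-id link directly, e.g. in scripts:
ls -l /dev/disk/by-id/

Either way the mapping survives renumbering of the vdX nodes, because it is keyed on the disk's identity rather than its discovery order.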

Thanks a lot Nir! Good to know that oVirt cannot guarantee the disk names, so I don't need to spend more time trying to enable such a feature.

I can always reproduce the problem via my application. Basically, the procedure is as follows:

1. create a VM
2. add disks to the VM (e.g., disk names: disk1, disk3)
3. check the disk mappings via `virsh domblklist`
4. add another disk (say disk2, with a name that sorts alphabetically before some existing disks)
5. shut down the VM via the hypervisor and start it again (reboot won't work)
6. run `virsh domblklist` again; you might then see the problem I mentioned before

There are no virtio devices inside /dev/disk/by-id/ in my guest VM. And I just noticed that the disk mapping information given by the hypervisor (from the VM configuration or the virsh command) is different from the reality inside the VM. The disk names inside the VM actually did not change.

So now my issue is: given a disk name (/dev/vdb) of a VM, how can I get its wwid? Before, I got it from the hypervisor, but now the hypervisor's information is not reliable, and since the disk is unformatted, I cannot use a filesystem UUID.

Joy

On Wed, Dec 2, 2020 at 1:28 PM Nir Soffer <nsoffer@redhat.com> wrote:
[...]
Device nodes are not stable, and oVirt cannot guarantee that you will get the same node in the guest in all runs.
You should use /dev/disk/by-id/xxx links to locate devices, and blkid to create fstab mounts that do not depend on node names.
Regardless, oVirt tries to keep devices as stable as possible. Do you know how to reproduce this issue reliably?
Nir

On Wed, Dec 2, 2020 at 4:57 PM Joy Li <joooy.li@gmail.com> wrote:
Thanks a lot Nir! Good to know that oVirt cannot guarantee the disk names, so I don't need to spend more time trying to enable such a feature.
I can always reproduce the problem via my application. Basically, the procedure is as follows:
1. create a VM
Which guest OS? Can you share the guest disk image or ISO image used to install it?
2. add disks to the VM (e.g., disk names: disk1, disk3)
3. check the disk mappings via `virsh domblklist`
Please share the libvirt domain XML (virsh dumpxml vm-name).
4. add another disk (say disk2, with a name that sorts alphabetically before some existing disks)
Did you add the disk while the VM was running (hotplug)?
5. shut down the VM via the hypervisor and start it again (reboot won't work)
What do you mean by "reboot won't work"?
6. run `virsh domblklist` again; you might then see the problem I mentioned before
Is the mapping different compared with the state before the reboot?
There are no virtio devices inside /dev/disk/by-id/ in my guest VM.
Maybe you don't have systemd-udev installed? The links in /dev/disk/... are created by udev during startup, and when detecting a new disk.
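A quick way to check from inside the guest (a sketch; /dev/vdb is a placeholder). Note that udev only creates a virtio-* by-id link when the disk has a serial, i.e. a <serial> element in the domain XML:

# show which symlinks udev created for the device
udevadm info --query=symlink /dev/vdb

# replay the "add" events so udev recreates any missing links
udevadm trigger --action=add --subsystem-match=block
ls -l /dev/disk/by-id/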
And I just noticed that the disk mapping information given by the hypervisor (from the VM configuration or the virsh command) is different from the reality inside the VM. The disk names inside the VM actually did not change.
So now my issue is: given a disk name (/dev/vdb) of a VM, how can I get its wwid? Before, I got it from the hypervisor, but now the hypervisor's information is not reliable, and since the disk is unformatted, I cannot use a filesystem UUID.
You can use:

# udevadm info /dev/vda
P: /devices/pci0000:00/0000:00:02.5/0000:06:00.0/virtio4/block/vda
N: vda
L: 0
S: disk/by-path/pci-0000:06:00.0
S: disk/by-id/virtio-b97e68b2-87ea-45ca-9
S: disk/by-path/virtio-pci-0000:06:00.0
E: DEVPATH=/devices/pci0000:00/0000:00:02.5/0000:06:00.0/virtio4/block/vda
E: DEVNAME=/dev/vda
E: DEVTYPE=disk
E: MAJOR=252
E: MINOR=0
E: SUBSYSTEM=block
E: USEC_INITIALIZED=10518442
E: ID_SERIAL=b97e68b2-87ea-45ca-9
E: ID_PATH=pci-0000:06:00.0
E: ID_PATH_TAG=pci-0000_06_00_0
E: DEVLINKS=/dev/disk/by-path/pci-0000:06:00.0 /dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9 /dev/disk/by-path/virtio-pci-0000:06:00.0
E: TAGS=:systemd:

I tried to reproduce your issue on a 4.4.5 development build.

Starting the VM with 2 direct LUN disks:

# virsh -r dumpxml disk-mapping
...
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native' discard='unmap'/>
      <source dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5' index='3'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore type='block' index='5'>
        <format type='qcow2'/>
        <source dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/ad891316-b80a-4ab5-895b-400108bd0ca1'>
          <seclabel model='dac' relabel='no'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='sda' bus='scsi'/>
      <serial>40018b33-2b11-4d10-82e4-604a5b135fb2</serial>
      <boot order='1'/>
      <alias name='ua-40018b33-2b11-4d10-82e4-604a5b135fb2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native' iothread='1'/>
      <source dev='/dev/mapper/3600140594af345ed76d42058f2b1a454' index='2'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <serial>b97e68b2-87ea-45ca-94fb-277d5b30baa2</serial>
      <alias name='ua-b97e68b2-87ea-45ca-94fb-277d5b30baa2'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native' iothread='1'/>
      <source dev='/dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5' index='1'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <serial>d9a29187-f492-4a0d-aea2-7d5216c957d7</serial>
      <alias name='ua-d9a29187-f492-4a0d-aea2-7d5216c957d7'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </disk>
...

# virsh -r domblklist disk-mapping
Target   Source
------------------------------------------------------------
sdc      -
sda      /rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5
vda      /dev/mapper/3600140594af345ed76d42058f2b1a454
vdb      /dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5

In the guest:

# ls -lh /dev/disk/by-id/virtio-*
lrwxrwxrwx. 1 root root 9 Jan  6 09:42 /dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9 -> ../../vda
lrwxrwxrwx. 1 root root 9 Jan  6 09:42 /dev/disk/by-id/virtio-d9a29187-f492-4a0d-a -> ../../vdb
NOTE: "d9a29187-f492-4a0d-a" are the first characters of the disk id "d9a29187-f492-4a0d-aea2-7d5216c957d7" seen in oVirt:
https://my-engine/ovirt-engine/webadmin/?locale=en_US#disks-general;id=d9a29...

Adding another disk that sorts into the middle (while the VM is running):

# virsh -r dumpxml disk-mapping
...
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native' discard='unmap'/>
      <source dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5' index='3'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore type='block' index='5'>
        <format type='qcow2'/>
        <source dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/ad891316-b80a-4ab5-895b-400108bd0ca1'>
          <seclabel model='dac' relabel='no'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='sda' bus='scsi'/>
      <serial>40018b33-2b11-4d10-82e4-604a5b135fb2</serial>
      <boot order='1'/>
      <alias name='ua-40018b33-2b11-4d10-82e4-604a5b135fb2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native' iothread='1'/>
      <source dev='/dev/mapper/3600140594af345ed76d42058f2b1a454' index='2'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <serial>b97e68b2-87ea-45ca-94fb-277d5b30baa2</serial>
      <alias name='ua-b97e68b2-87ea-45ca-94fb-277d5b30baa2'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native' iothread='1'/>
      <source dev='/dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5' index='1'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <serial>d9a29187-f492-4a0d-aea2-7d5216c957d7</serial>
      <alias name='ua-d9a29187-f492-4a0d-aea2-7d5216c957d7'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native' iothread='1'/>
      <source dev='/dev/mapper/36001405b4d0c0b7544d47438b21296ef' index='7'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore/>
      <target dev='vdc' bus='virtio'/>
      <serial>e801c2e4-dc2e-4c53-b17b-bf6de99f16ed</serial>
      <alias name='ua-e801c2e4-dc2e-4c53-b17b-bf6de99f16ed'/>
      <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
    </disk>
...

# virsh -r domblklist disk-mapping
Target   Source
------------------------------------------------------------
sdc      -
sda      /rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5
vda      /dev/mapper/3600140594af345ed76d42058f2b1a454
vdb      /dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5
vdc      /dev/mapper/36001405b4d0c0b7544d47438b21296ef

In the guest:

# ls -lh /dev/disk/by-id/virtio-*
lrwxrwxrwx. 1 root root 9 Jan  6 09:42 /dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9 -> ../../vda
lrwxrwxrwx. 1 root root 9 Jan  6 09:42 /dev/disk/by-id/virtio-d9a29187-f492-4a0d-a -> ../../vdb
lrwxrwxrwx. 1 root root 9 Jan  6 09:51 /dev/disk/by-id/virtio-e801c2e4-dc2e-4c53-b -> ../../vdc
Shutting down the VM and starting it again:

# virsh -r dumpxml disk-mapping
...
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='qcow2' cache='none' error_policy='stop' io='native' discard='unmap'/>
      <source dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5' index='4'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore type='block' index='6'>
        <format type='qcow2'/>
        <source dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/ad891316-b80a-4ab5-895b-400108bd0ca1'>
          <seclabel model='dac' relabel='no'/>
        </source>
        <backingStore/>
      </backingStore>
      <target dev='sda' bus='scsi'/>
      <serial>40018b33-2b11-4d10-82e4-604a5b135fb2</serial>
      <boot order='1'/>
      <alias name='ua-40018b33-2b11-4d10-82e4-604a5b135fb2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native' iothread='1'/>
      <source dev='/dev/mapper/3600140594af345ed76d42058f2b1a454' index='3'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <serial>b97e68b2-87ea-45ca-94fb-277d5b30baa2</serial>
      <alias name='ua-b97e68b2-87ea-45ca-94fb-277d5b30baa2'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native' iothread='1'/>
      <source dev='/dev/mapper/36001405b4d0c0b7544d47438b21296ef' index='2'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <serial>e801c2e4-dc2e-4c53-b17b-bf6de99f16ed</serial>
      <alias name='ua-e801c2e4-dc2e-4c53-b17b-bf6de99f16ed'/>
      <address type='pci' domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native' iothread='1'/>
      <source dev='/dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5' index='1'>
        <seclabel model='dac' relabel='no'/>
      </source>
      <backingStore/>
      <target dev='vdc' bus='virtio'/>
      <serial>d9a29187-f492-4a0d-aea2-7d5216c957d7</serial>
      <alias name='ua-d9a29187-f492-4a0d-aea2-7d5216c957d7'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </disk>
...

# virsh -r domblklist disk-mapping
Target   Source
------------------------------------------------------------
sdc      -
sda      /rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5
vda      /dev/mapper/3600140594af345ed76d42058f2b1a454
vdb      /dev/mapper/36001405b4d0c0b7544d47438b21296ef
vdc      /dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5

In the guest:

# ls -lh /dev/disk/by-id/virtio-*
lrwxrwxrwx. 1 root root 9 Jan  6 09:55 /dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9 -> ../../vda
lrwxrwxrwx. 1 root root 9 Jan  6 09:55 /dev/disk/by-id/virtio-d9a29187-f492-4a0d-a -> ../../vdb
lrwxrwxrwx. 1 root root 9 Jan  6 09:55 /dev/disk/by-id/virtio-e801c2e4-dc2e-4c53-b -> ../../vdc
Comparing to the state before the reboot:

# virsh -r domblklist disk-mapping
Target   Source
------------------------------------------------------------
sdc      -
sda      /rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5
vda      /dev/mapper/3600140594af345ed76d42058f2b1a454
vdb      /dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5
vdc      /dev/mapper/36001405b4d0c0b7544d47438b21296ef

# ls -lh /dev/disk/by-id/virtio-*
lrwxrwxrwx. 1 root root 9 Jan  6 09:42 /dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9 -> ../../vda
lrwxrwxrwx. 1 root root 9 Jan  6 09:42 /dev/disk/by-id/virtio-d9a29187-f492-4a0d-a -> ../../vdb
lrwxrwxrwx. 1 root root 9 Jan  6 09:51 /dev/disk/by-id/virtio-e801c2e4-dc2e-4c53-b -> ../../vdc

In the guest, the disks are mapped to the same device names. It looks like libvirt domblklist is not correct - vdb and vdc are switched. Peter, is this expected?

Nir
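On the question of getting the wwid/disk id for a given node from inside the guest: for virtio-blk disks that have a serial set (as in the XML above), the guest can also read it straight from sysfs, without relying on udev links. A minimal sketch (vdb and the output are taken from the pre-reboot state above; note that virtio truncates the serial to 20 characters):

# read the virtio-blk serial (the oVirt disk id, truncated) for a node
cat /sys/block/vdb/serial
d9a29187-f492-4a0d-a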
[...]

On Wed, Jan 06, 2021 at 17:16:24 +0200, Nir Soffer wrote:
On Wed, Dec 2, 2020 at 4:57 PM Joy Li <joooy.li@gmail.com> wrote:
[...]
Comparing to the state before the reboot:
# virsh -r domblklist disk-mapping
Target   Source
------------------------------------------------------------
sdc      -
sda      /rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5
vda      /dev/mapper/3600140594af345ed76d42058f2b1a454
vdb      /dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5
vdc      /dev/mapper/36001405b4d0c0b7544d47438b21296ef
# ls -lh /dev/disk/by-id/virtio-*
lrwxrwxrwx. 1 root root 9 Jan  6 09:42 /dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9 -> ../../vda
lrwxrwxrwx. 1 root root 9 Jan  6 09:42 /dev/disk/by-id/virtio-d9a29187-f492-4a0d-a -> ../../vdb
lrwxrwxrwx. 1 root root 9 Jan  6 09:51 /dev/disk/by-id/virtio-e801c2e4-dc2e-4c53-b -> ../../vdc
In the guest, the disks are mapped to the same device names.
It looks like libvirt domblklist is not correct - vdb and vdc are switched. Peter, is this expected?
The names in 'virsh domblklist' are unfortunately and confusingly chosen to match the expected /dev/ device node name, but it is at the kernel's discretion to name the /dev/ nodes. This means that it's not guaranteed that what you see in 'virsh domblklist' will match the state in the guest. In this case I think the reorder happens because the PCI address of vdb is larger than the address of vdc.

A partial workaround can be to use the data provided by the qemu guest agent, for example via 'virsh domfsinfo':

$ virsh domfsinfo fedora32
 Mountpoint   Name   Type   Target
------------------------------------
 /            dm-0   xfs    vda
 /boot        vda1   xfs    vda

Here the guest-host matching is established via the PCI address, so the 'Target' field accurately refers to the target in the VM XML.

Similarly, the Linux kernel recently changed the enumeration of SCSI devices, so they are not guaranteed to match either. Libguestfs, for example, needed a workaround:

https://github.com/libguestfs/libguestfs/commit/bca9b94fc593771b3801b09b95e4...
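A minimal sketch of doing that PCI-address matching by hand from inside the guest (assuming udev is available; the device and address come from Nir's reproduction above):

# the domain XML gave vdb the address bus='0x07' slot='0x00' function='0x0'
# udev exposes the guest view of the same PCI address as ID_PATH:
udevadm info -q property /dev/vdb | grep '^ID_PATH='
ID_PATH=pci-0000:07:00.0

Matching the XML <address> against ID_PATH (or the /dev/disk/by-path/ links) identifies the disk even when the vdX names diverge.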

On Wed, Jan 06, 2021 at 04:36:46PM +0100, Peter Krempa wrote:
On Wed, Jan 06, 2021 at 17:16:24 +0200, Nir Soffer wrote:
On Wed, Dec 2, 2020 at 4:57 PM Joy Li <joooy.li@gmail.com> wrote:
[...]
In the guest, the disks are mapped to the same device names.
It looks like libvirt domblklist is not correct - vdb and vdc are switched. Peter, is this expected?
The names in 'virsh domblklist' are unfortunately and confusingly chosen to match the expected /dev/ device node name, but it is at the kernel's discretion to name the /dev/ nodes.
This means that it's not guaranteed that what you see in 'virsh domblklist' will match the state in the guest.
Essentially, the only thing the disk device name is used for is sorting the <disk> elements within the XML document. This in turn affects the order in which PCI addresses (virtio-blk) or SCSI LUNs (virtio-scsi) are assigned, which hints at the order the guest OS *might* assign device names in. The device name from the XML is not exposed to the guest directly, though.

Certainly when hotplugging/unplugging is involved, all bets are off with respect to what disk names you'll see in the guest vs the XML. Don't expect them to match except by luck.

Regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
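The practical consequence for tooling is to resolve disks by serial rather than by name. A hedged host-side sketch (vm-name is a placeholder) that pairs each <target dev> with its <serial> from the domain XML; the serial can then be matched inside the guest via /dev/disk/by-id/virtio-* or /sys/block/*/serial:

# list target/serial lines from the domain XML; each disk's target is
# followed by its serial (devices without a serial, such as the cdrom,
# print only a target line)
virsh -r dumpxml vm-name | grep -E "<target dev=|<serial>"

For example, with the domain from Nir's reproduction this prints lines such as:

      <target dev='vdb' bus='virtio'/>
      <serial>d9a29187-f492-4a0d-aea2-7d5216c957d7</serial>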
participants (4)
- Daniel P. Berrangé
- Joy Li
- Nir Soffer
- Peter Krempa