Upgrade from ovirt-node-ng 4.4.1 to 4.4.2 fails with "Local storage domains were found on the same filesystem as / ! Please migrate the data to a new LV before upgrading, or you will lose the VMs"

So, we have a cluster of 3 servers running oVirt Node 4.4.1. Now we are attempting to upgrade it to the latest version, 4.4.2, but it fails as shown below. The problem is that the storage domains listed are all located on an external iSCSI SAN. The storage domains were created in another cluster we had (oVirt Node 4.3 based), detached from the old cluster, and imported successfully into the new cluster through the oVirt Management interface. As I understand it, oVirt itself has created the mount points under /rhev/data-center/mnt/blockSD/ for each of the iSCSI domains, and as such they are not really storage domains on the / filesystem.

I do believe the solution to the mentioned BugZilla bug has caused a new bug, but I may be wrong. I cannot see what we have done wrong when importing these storage domains to the cluster (well, actually, some were freshly created in this cluster, thus fully managed by the oVirt 4.4 manager interface).

What can we do to proceed in upgrading the hosts to the latest oVirt Node?

Dependencies resolved.
=============================================================================
 Package                      Architecture  Version       Repository    Size
=============================================================================
Upgrading:
 ovirt-node-ng-image-update   noarch        4.4.2-1.el8   ovirt-4.4    782 M
     replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.1.5-1.el8

Transaction Summary
=============================================================================
Upgrade  1 Package

Total download size: 782 M
Is this ok [y/N]: y
Downloading Packages:
ovirt-node-ng-image-update-4.4.2-1.el8.noarch.rpm    8.6 MB/s | 782 MB  01:31
-----------------------------------------------------------------------------
Total                                                8.6 MB/s | 782 MB  01:31
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                      1/1
  Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch        1/3
Local storage domains were found on the same filesystem as / !
Please migrate the data to a new LV before upgrading, or you will lose the VMs
See: https://bugzilla.redhat.com/show_bug.cgi?id=1550205#c3
Storage domains were found in:
 /rhev/data-center/mnt/blockSD/c3df4c98-ca97-4486-a5d4-d0321a0fb801/dom_md
 /rhev/data-center/mnt/blockSD/90a52746-e0cb-4884-825d-32a9d94710ff/dom_md
 /rhev/data-center/mnt/blockSD/74673f68-e1fa-46cf-b0ac-a35f05d42a7a/dom_md
 /rhev/data-center/mnt/blockSD/f5fe00ba-c899-428f-96a2-e8d5e5707905/dom_md
 /rhev/data-center/mnt/blockSD/5c3d9aff-66a3-4555-a17d-172fbf043505/dom_md
 /rhev/data-center/mnt/blockSD/4cc6074b-a5f5-4337-a32f-0ace577e5e47/dom_md
 /rhev/data-center/mnt/blockSD/a7658abd-e605-455e-9253-69d7e59ff50a/dom_md
 /rhev/data-center/mnt/blockSD/f18e6e5c-124b-4a66-ae98-2088c87de42b/dom_md
 /rhev/data-center/mnt/blockSD/f431e29b-77cd-4e51-8f7f-dd73543dfce6/dom_md
 /rhev/data-center/mnt/blockSD/0f53281c-c756-4171-bcd2-8946956ebbd0/dom_md
 /rhev/data-center/mnt/blockSD/9fad9f9b-c549-4226-9278-51208411b2ac/dom_md
 /rhev/data-center/mnt/blockSD/c64006e7-e22c-486f-82a5-20d2b9431299/dom_md
 /rhev/data-center/mnt/blockSD/509de8b4-bc41-40fa-9354-16c24ae16442/dom_md
 /rhev/data-center/mnt/blockSD/0d57fcd3-4622-41cc-ab23-744b93d175a0/dom_md
error: %prein(ovirt-node-ng-image-update-4.4.2-1.el8.noarch) scriptlet failed, exit status 1
Error in PREIN scriptlet in rpm package ovirt-node-ng-image-update
  Verifying        : ovirt-node-ng-image-update-4.4.2-1.el8.noarch        1/3
  Verifying        : ovirt-node-ng-image-update-4.4.1.5-1.el8.noarch      2/3
  Verifying        : ovirt-node-ng-image-update-placeholder-4.4.1.5-1.el8.noarch 3/3
Unpersisting: ovirt-node-ng-image-update-4.4.1.5-1.el8.noarch.rpm
Unpersisting: ovirt-node-ng-image-update-placeholder-4.4.1.5-1.el8.noarch.rpm

Thanks in advance for your good help.

On Thu, Oct 8, 2020 at 1:01 PM gantonjo-ovirt--- via Users <users@ovirt.org> wrote:
I do believe the solution to the mentioned BugZilla bug has caused a new bug, but I may be wrong. I cannot see what we have done wrong when importing these storage domains to the cluster (well, actually, some were freshly created in this cluster, thus fully managed by oVirt 4.4 manager interface).
This is likely caused by the fix for: https://bugzilla.redhat.com/show_bug.cgi?id=1850378 .

Adding Nir.
What can we do to proceed in upgrading the hosts to the latest oVirt Node?
Right now, without another fix? Make sure that the following command:

find / -xdev -path "*/dom_md/metadata" -not -empty

returns empty output. You might need to move the host to maintenance and then manually umount your SDs, or something like that.

Please open a bug so that we can refine this command further.

Thanks and best regards,
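Roughly, the whole sequence would look something like this (an untested sketch; the mount point in step 2 is only an example):

# 1. Move the host to maintenance from the engine first.
# 2. Manually unmount any file-based storage domain still mounted
#    (example path only -- check the output of "mount" for real ones):
umount /rhev/data-center/mnt/nfs.example.com:_export_data
# 3. Verify the pre-upgrade check would now pass; this must print nothing:
find / -xdev -path "*/dom_md/metadata" -not -empty
# 4. Only then retry the upgrade:
dnf update ovirt-node-ng-image-update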
-- Didi

Hi, Didi. Thanks for your answer. Unfortunately the suggested command shows the same storage domains as I listed above. The node is in Maintenance (as it would be when installing an update from the GUI). None of these storage domains are mounted on the node (at least none are visible when running "mount"), thus I am not able to unmount them. I guess they are visible to the node's OS because they are iSCSI domains, even though the node itself makes no use of them.

That said, looking at the files listed by the find command you gave me, the storage domains all have links to non-existent /dev/ locations, like the following:

/rhev/data-center/mnt/blockSD/0d57fcd3-4622-41cc-ab23-744b93d175a0/dom_md/:
total 0
lrwxrwxrwx. 1 vdsm kvm 45 Oct  6 09:36 ids -> /dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/ids
lrwxrwxrwx. 1 vdsm kvm 47 Oct  6 09:36 inbox -> /dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/inbox
lrwxrwxrwx. 1 vdsm kvm 48 Oct  6 09:36 leases -> /dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/leases
lrwxrwxrwx. 1 vdsm kvm 48 Oct  6 09:36 master -> /dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/master
lrwxrwxrwx. 1 vdsm kvm 50 Oct  6 09:36 metadata -> /dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/metadata
lrwxrwxrwx. 1 vdsm kvm 48 Oct  6 09:36 outbox -> /dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/outbox
lrwxrwxrwx. 1 vdsm kvm 49 Oct  6 09:36 xleases -> /dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/xleases

ls -l /dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/ids
ls: cannot access '/dev/0d57fcd3-4622-41cc-ab23-744b93d175a0/ids': No such file or directory

I guess I could remove the export from the iSCSI SAN towards the node, reboot the node and then try to upgrade it via "dnf update". However, the node would then not be able to serve the storage domains to its VMs when taken out of Maintenance.
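For the record, those dangling links are easy to list with find's -xtype test (a sketch, assuming GNU find):

# Broken symlinks under the blockSD tree: without -L, -xtype l is true
# only for symlinks whose target does not resolve (the missing /dev paths).
find /rhev/data-center/mnt/blockSD -xtype l -ls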

Hi again. Just did a test. I unmapped the node from the iSCSI SAN and rebooted it. After the reboot, the storage domains were still listed as being on /. In other words, this was not a solution.

On Tue, Oct 13, 2020 at 1:11 PM Yedidyah Bar David <didi@redhat.com> wrote:
find / -xdev -path "*/dom_md/metadata" -not -empty
Nir (Levy) - perhaps we should change this command to something like:

find / -xdev -path "*/dom_md/metadata" -not -empty -not -type l
Adding also Nir Soffer for a workaround.

Nir - is it safe, in this case, to remove these dead symlinks? Or even all of /rhev/data-center/mnt/blockSD/* ? Will VDSM re-create them upon return from maintenance?

Any other workaround?

Thanks and best regards,
-- Didi

Hi, Didi. The command

find / -xdev -path "*/dom_md/metadata" -not -empty -not -type l

shows no storage domains, so I guess this would fix the "false positive" list of storage domains that are not really mounted on the node we want to update.

On Tue, Oct 13, 2020 at 4:20 PM Yedidyah Bar David <didi@redhat.com> wrote:
Adding also Nir Soffer for a workaround. Nir - is it safe, in this case, to remove these dead symlinks? Or even all of /rhev/data-center/mnt/blockSD/* ? Will VDSM re-create them upon return from maintenance?
It is safe to remove /rhev/data-center/mnt/blockSD/* when the host is in maintenance, but the contents of /rhev/data-center are not meant to be modified by users. This area is managed by vdsm and nobody should touch it.

The issue is the wrong search: looking into /rhev/* when looking for local storage domains. Local storage domains cannot be under /rhev.
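If you do clean it up by hand, keep it minimal, something like this (a sketch; only with the host in maintenance):

# The host MUST be in maintenance. This tree is managed by vdsm; the stale
# directories and dangling symlinks left behind are what trips the upgrade
# check. vdsm is expected to set up what it needs again when the domains
# are activated.
rm -rf /rhev/data-center/mnt/blockSD/*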

On Tue, Oct 13, 2020 at 5:51 PM Nir Soffer <nsoffer@redhat.com> wrote:
The issue is the wrong search: looking into /rhev/* when looking for local storage domains. Local storage domains cannot be under /rhev.
Thanks @Nir Soffer <nsoffer@redhat.com>. Would your comment here: https://bugzilla.redhat.com/show_bug.cgi?id=1883157#c6 be it? Or might there be some other scenarios we should consider and ask QE to test?

I recall we discussed that it would be better to rely on vdsm to determine whether the host is upgradable or not, but at this stage the vdsm daemon is not active. Would using some (new) code from the vdsm source be preferred here instead of updating the criteria for the upgrade, or do you not expect changes in the near future?

Regards.

On Tue, Oct 13, 2020 at 5:56 PM Nir Levy <nlevy@redhat.com> wrote:
Thanks @Nir Soffer. Would your comment here: https://bugzilla.redhat.com/show_bug.cgi?id=1883157#c6 be it?
Will look at it.
Or might there be some other scenarios we should consider and ask QE to test? I recall we discussed that it would be better to rely on vdsm to determine whether the host is upgradable or not, but at this stage the vdsm daemon is not active. Would using some (new) code from the vdsm source be preferred here instead of updating the criteria for the upgrade, or do you not expect changes in the near future?
I don't expect changes, but the right place for code looking up localfs storage domains on a host is vdsm. This way changes in vdsm will keep this code working. Something like:

vdsm-tool list-localfs-storage-domains
[
    { "path": ..., "name": ... }
    ...
]

With this you can show a useful error message about the storage domain. I think we should add this in vdsm for 4.4.4; for now we need some quick solution to unbreak 4.4.3.
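Until then, the scriptlet check could be tightened along these lines (a sketch, not the shipped code):

# Skip /rhev entirely (local storage domains cannot live there) and ignore
# symlinks; report only real, non-empty metadata files on the root filesystem.
find / -xdev -path /rhev -prune -o \
    -path "*/dom_md/metadata" -not -empty -not -type l -print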

Thanks for your answer. Yesterday we removed all content from /rhev/data-center/mnt/blockSD/* on our nodes (in Local Maintenance mode) and manually updated the nodes with "dnf update". After a reboot and a night's sleep, all 12 nodes are reporting up-to-date status. That is, we are now on 4.4.2. Looking forward to seeing the error corrected in a future release.
participants (4):
- gantonjo-ovirt@yahoo.com
- Nir Levy
- Nir Soffer
- Yedidyah Bar David