On Tue, Oct 13, 2020 at 5:51 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
On Tue, Oct 13, 2020 at 4:20 PM Yedidyah Bar David <didi(a)redhat.com> wrote:
>
> On Tue, Oct 13, 2020 at 1:11 PM Yedidyah Bar David <didi(a)redhat.com> wrote:
> >
> > On Thu, Oct 8, 2020 at 1:01 PM gantonjo-ovirt--- via Users
> > <users(a)ovirt.org> wrote:
> > >
> > > So, we have a cluster of 3 servers running oVirt Node 4.4.1. Now we
> > > are attempting to upgrade it to the latest version, 4.4.2, but it
> > > fails as shown below. The problem is that the storage domains listed
> > > are all located on an external iSCSI SAN. The storage domains were
> > > created in another cluster we had (oVirt Node 4.3 based), detached
> > > from the old cluster, and imported successfully into the new cluster
> > > through the oVirt Management interface. As I understand it, oVirt
> > > itself created the mount points under /rhev/data-center/mnt/blockSD/
> > > for each of the iSCSI domains, and as such they are not really
> > > storage domains on the / filesystem.
> > >
> > > I do believe the fix for the mentioned Bugzilla bug has caused a new
> > > bug, but I may be wrong. I cannot see what we did wrong when
> > > importing these storage domains into the cluster (well, actually,
> > > some were freshly created in this cluster, and thus fully managed by
> > > the oVirt 4.4 manager interface).
> >
> > This is likely caused by the fix for:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1850378
> >
> > Adding Nir.
> >
> > >
> > > What can we do to proceed in upgrading the hosts to the latest
> > > oVirt Node?
> >
> > Right now, without another fix? Make sure that the following command:
> >
> > find / -xdev -path "*/dom_md/metadata" -not -empty
> >
> > returns empty output.
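> >
> > A minimal sketch of the same check as a yes/no test (untested, it
> > just wraps the find above):
> >
> > if [ -z "$(find / -xdev -path '*/dom_md/metadata' -not -empty)" ]; then
> >     echo "no local SD metadata found on /; the prein check should pass"
> > else
> >     echo "non-empty dom_md/metadata found; the upgrade will be blocked"
> > fi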
> >
> > You might need to move the host to maintenance and then manually
> > umount your SDs, or something like that.
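> >
> > As an untested aside, before touching anything you can check what is
> > actually mounted under the vdsm-managed tree:
> >
> > mount | grep /rhev/data-center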
> >
> > Please open a bug so that we can refine this command further.
>
> Nir (Levy) - perhaps we should change this command to something like:
>
> find / -xdev -path "*/dom_md/metadata" -not -empty -not -type l
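>
> As an untested companion, this would list just the symlink matches
> that the extra -not -type l skips:
>
> find / -xdev -path "*/dom_md/metadata" -not -empty -type l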
>
> >
> > Thanks and best regards,
> >
> > >
> > > Dependencies resolved.
> > > ===========================================================================
> > >  Package                     Architecture  Version      Repository   Size
> > > ===========================================================================
> > > Upgrading:
> > >  ovirt-node-ng-image-update  noarch        4.4.2-1.el8  ovirt-4.4    782 M
> > >      replacing  ovirt-node-ng-image-update-placeholder.noarch 4.4.1.5-1.el8
> > >
> > > Transaction Summary
> > > ===========================================================================
> > > Upgrade  1 Package
> > >
> > > Total download size: 782 M
> > > Is this ok [y/N]: y
> > > Downloading Packages:
> > > ovirt-node-ng-image-update-4.4.2-1.el8.noarch.rpm  8.6 MB/s | 782 MB  01:31
> > > ---------------------------------------------------------------------------
> > > Total                                              8.6 MB/s | 782 MB  01:31
> > > Running transaction check
> > > Transaction check succeeded.
> > > Running transaction test
> > > Transaction test succeeded.
> > > Running transaction
> > >   Preparing        :                                                   1/1
> > >   Running scriptlet: ovirt-node-ng-image-update-4.4.2-1.el8.noarch     1/3
> > > Local storage domains were found on the same filesystem as / !
> > > Please migrate the data to a new LV before upgrading, or you will lose the VMs
> > > See: https://bugzilla.redhat.com/show_bug.cgi?id=1550205#c3
> > > Storage domains were found in:
> > > /rhev/data-center/mnt/blockSD/c3df4c98-ca97-4486-a5d4-d0321a0fb801/dom_md
> > > /rhev/data-center/mnt/blockSD/90a52746-e0cb-4884-825d-32a9d94710ff/dom_md
> > > /rhev/data-center/mnt/blockSD/74673f68-e1fa-46cf-b0ac-a35f05d42a7a/dom_md
> > > /rhev/data-center/mnt/blockSD/f5fe00ba-c899-428f-96a2-e8d5e5707905/dom_md
> > > /rhev/data-center/mnt/blockSD/5c3d9aff-66a3-4555-a17d-172fbf043505/dom_md
> > > /rhev/data-center/mnt/blockSD/4cc6074b-a5f5-4337-a32f-0ace577e5e47/dom_md
> > > /rhev/data-center/mnt/blockSD/a7658abd-e605-455e-9253-69d7e59ff50a/dom_md
> > > /rhev/data-center/mnt/blockSD/f18e6e5c-124b-4a66-ae98-2088c87de42b/dom_md
> > > /rhev/data-center/mnt/blockSD/f431e29b-77cd-4e51-8f7f-dd73543dfce6/dom_md
> > > /rhev/data-center/mnt/blockSD/0f53281c-c756-4171-bcd2-8946956ebbd0/dom_md
> > > /rhev/data-center/mnt/blockSD/9fad9f9b-c549-4226-9278-51208411b2ac/dom_md
> > > /rhev/data-center/mnt/blockSD/c64006e7-e22c-486f-82a5-20d2b9431299/dom_md
> > > /rhev/data-center/mnt/blockSD/509de8b4-bc41-40fa-9354-16c24ae16442/dom_md
> > > /rhev/data-center/mnt/blockSD/0d57fcd3-4622-41cc-ab23-744b93d175a0/dom_md
>
> Adding also Nir Soffer for a workaround. Nir - is it safe, in this
> case, to remove these dead symlinks? Or even all of
> /rhev/data-center/mnt/blockSD/* ? Will VDSM re-create them upon
> return from maintenance?
It is safe to remove /rhev/data-center/mnt/blockSD/* when the host is
in maintenance, but the contents of /rhev/data-center are not meant to
be modified by users. This area is managed by vdsm and nobody should
touch it.

The issue is the wrong search: the upgrade check looks into /rhev/*
when searching for local storage domains, but local storage domains
cannot be under /rhev.
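So the check should probably also exclude that tree. An untested
sketch (the extra -not -path exclusion is an assumption, not a merged
fix):

find / -xdev -path "*/dom_md/metadata" -not -empty -not -type l -not -path "/rhev/*"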
Thanks @Nir Soffer <nsoffer(a)redhat.com>. Would your comment here:
https://bugzilla.redhat.com/show_bug.cgi?id=1883157#c6 be it? Or are
there other scenarios we should consider and ask QE to test? I recall
we discussed that it would be better to rely on vdsm to determine
whether the host is upgradable or not, but at this stage the vdsm
daemon is not active. Would using some (new) code from the vdsm
source be preferred here, instead of updating the upgrade criteria,
or do you not expect changes in the near future? Regards.
>
> Any other workaround?
>
> Thanks and best regards,
>
> > > error: %prein(ovirt-node-ng-image-update-4.4.2-1.el8.noarch) scriptlet failed, exit status 1
> > >
> > > Error in PREIN scriptlet in rpm package ovirt-node-ng-image-update
> > >   Verifying        : ovirt-node-ng-image-update-4.4.2-1.el8.noarch                1/3
> > >   Verifying        : ovirt-node-ng-image-update-4.4.1.5-1.el8.noarch              2/3
> > >   Verifying        : ovirt-node-ng-image-update-placeholder-4.4.1.5-1.el8.noarch  3/3
> > > Unpersisting: ovirt-node-ng-image-update-4.4.1.5-1.el8.noarch.rpm
> > > Unpersisting: ovirt-node-ng-image-update-placeholder-4.4.1.5-1.el8.noarch.rpm
> > >
> > >
> > > Thanks in advance for your good help.
> > > _______________________________________________
> > > Users mailing list -- users(a)ovirt.org
> > > To unsubscribe send an email to users-leave(a)ovirt.org
> > > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > > oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> > > List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/Z7AM2H5VZWY...
> >
> >
> >
> > --
> > Didi
>
>
>
> --
> Didi
>