On Sun, Oct 4, 2020 at 6:09 PM Amit Bawer <abawer(a)redhat.com> wrote:
On Sun, Oct 4, 2020 at 5:28 PM Gianluca Cecchi <gianluca.cecchi(a)gmail.com> wrote:
>
> On Sun, Oct 4, 2020 at 10:21 AM Amit Bawer <abawer(a)redhat.com> wrote:
>>
>>
>>
>> Since there wasn't a filter set on the node, the 4.4.2 update added the default filter for the root-lv PV.
>> If there was some filter set before the upgrade, it would not have been added by the 4.4.2 update.
>>>
>>>
>
> Do you mean that I will get the same problem upgrading from 4.4.2 to an upcoming 4.4.3, as also now I don't have any filter set?
> This would not be desirable....
Once you have got back into 4.4.2, it's recommended to set the LVM filter to fit the PVs you use on your node.
For the local root PV you can run:
# vdsm-tool config-lvm-filter -y
For the gluster bricks you'll need to add their UUIDs to the filter as well.
vdsm-tool is expected to add all the devices needed by the mounted
logical volumes, so adding devices manually should not be needed.
If this does not work, please file a bug and include all the info needed to
reproduce the issue.
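For reference, the filter that config-lvm-filter generates ends up in /etc/lvm/lvm.conf and looks roughly like the sketch below; the lvm-pv-uuid values are placeholders for illustration (yours will differ and can be listed with "pvs -o pv_name,pv_uuid"):

# /etc/lvm/lvm.conf (illustrative only)
filter = [ "a|^/dev/disk/by-id/lvm-pv-uuid-<root-pv-uuid>$|", "a|^/dev/disk/by-id/lvm-pv-uuid-<brick-pv-uuid>$|", "r|.*|" ]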
The next upgrade should not set a filter on its own if one is already
set.
>
>
>>>
>>>
>>> Right now only two problems:
>>>
>>> 1) A long-running problem: from the engine web admin all the volumes are seen as up and the storage domains as up, while only the hosted-engine one is actually up; "data" and "vmstore" are down, as I can verify from the host, where there is only one /rhev/data-center/ mount:
>>>
> [snip]
>>>
>>>
>>> I already reported this, but I don't know if there is yet a bugzilla open for it.
>>
>> Did you get any response to the original mail? I haven't seen it on the users list.
>
>
> I think it was this thread, related to the 4.4.0 release and a question about auto-start of VMs.
> There was a script from Derek that tested whether domains were active and got false positives, and my comments about the same observed behaviour:
>
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/25KYZTFKX5Y...
>
> But I think there was no answer on that particular item/problem.
> Indeed I think you can easily reproduce it; I don't know if it happens only with Gluster or also with other storage domains.
> I don't know if it plays a part that on the last host during a whole shutdown (the only host, in the case of a single-host setup) you have to run the script
> /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh
> otherwise you sometimes risk not getting a complete shutdown.
> And perhaps this stop can have an influence on the following startup.
> In any case the web admin GUI (and the API access) should not show the domains as active when they are not. I think there is a bug in the code that checks this.
If it got no response so far, I think it could be helpful to file a bug with the details
of the setup and the steps involved here so it will get tracked.
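When filing it, it may also help to attach the actual mount state on the host next to what the engine reports; a rough way to capture that (assuming the usual oVirt mount paths, adjust to your setup) is:

# what the host really has mounted for the storage domains
mount | grep glusterSD
ls /rhev/data-center/mnt/glusterSD/
# state of the gluster volumes backing the domains
gluster volume status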
>
>>
>>>
>>> 2) I see that I cannot connect to the cockpit console of the node.
>>>
> [snip]
>>>
>>> NOTE: the host is not resolved by DNS, but I put an entry in the hosts file on my client.
>>
>> Setting up DNS might be required for authenticity; maybe other members on the list could tell better.
>
>
> It would be the first time I see that. Access to the web admin GUI works OK even without DNS resolution.
> I'm not sure if I had the same problem with the cockpit host console on 4.4.0.
Perhaps +Yedidyah Bar David could help regarding cockpit web access.
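In the meantime a couple of generic checks on the node might narrow it down (just a guess at the usual suspects, assuming the default oVirt Node cockpit setup):

systemctl status cockpit.socket   # the web console is socket-activated on port 9090
firewall-cmd --list-services      # should include "cockpit"
journalctl -u cockpit -b          # recent errors from the cockpit web service, if any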
>
> Gianluca
>
>
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement:
https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VYWJPRKRESP...