[ANN] oVirt 4.4.6 Fourth Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of oVirt 4.4.6
Fourth Release Candidate for testing, as of April 15th, 2021.
This update is the sixth in a series of stabilization updates to the 4.4
series.
Rebase on CentOS Stream
Starting with oVirt 4.4.6 RC4, both oVirt Node and the oVirt Engine Appliance
are based on CentOS Stream.
You can still install oVirt 4.4.6 on Red Hat Enterprise Linux, CentOS
Linux or equivalent.
Please note that when 4.4.6 is released as generally available, existing
oVirt Nodes updating to 4.4.6 will automatically be rebased on CentOS
Stream.
Helping to test this release candidate will help ensure a smooth experience
for your main installation.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA or later should not require redoing these
steps if they were already performed while upgrading from 4.4.1 to 4.4.2 GA.
They only need to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
Host enter emergency mode after upgrading to latest build
If the root file system on your hosts is on a multipath device, be aware
that after upgrading from 4.4.1 to 4.4.6 the host may enter emergency mode.
To prevent this, be sure to upgrade oVirt Engine first, then on your hosts:
1.
Remove the current lvm filter while still on 4.4.1, or in emergency mode
(if rebooted).
2.
Reboot.
3.
Upgrade to 4.4.6 (redeploy if the host is already on 4.4.6).
4.
Run vdsm-tool config-lvm-filter to confirm there is a new filter in
place.
5.
Only if not using oVirt Node:
- run "dracut --force --add multipath" to rebuild the initramfs with the
correct filter configuration
6.
Reboot.
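For a non-oVirt-Node host, the sequence above might look like this in a root shell. This is a sketch only: the exact way to remove the old filter depends on how it was written into /etc/lvm/lvm.conf on your host.

```shell
# Step 1: while still on 4.4.1 (or from emergency mode), drop the stale
# LVM filter line from /etc/lvm/lvm.conf (a .bak copy is kept), then reboot.
sed -i.bak '/^[[:space:]]*filter[[:space:]]*=/d' /etc/lvm/lvm.conf
reboot

# Steps 3-4: after upgrading the host to 4.4.6, let vdsm write a filter
# matching the current multipath layout and confirm it is in place.
vdsm-tool config-lvm-filter

# Step 5 (non-oVirt-Node hosts only): rebuild the initramfs so early boot
# uses the new filter, then reboot once more.
dracut --force --add multipath
reboot
```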
Documentation
-
If you want to try oVirt as quickly as possible, follow the instructions
on the Download <https://ovirt.org/download/> page.
-
For complete installation, administration, and usage instructions, see
the oVirt Documentation <https://ovirt.org/documentation/>.
-
For upgrading from a previous version, see the oVirt Upgrade Guide
<https://ovirt.org/documentation/upgrade_guide/>.
-
For a general overview of oVirt, see About oVirt
<https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Known issues:
-
Bug 1946095 <https://bugzilla.redhat.com/show_bug.cgi?id=1946095> - "No
valid network interface has been found" when starting HE deployment via
cockpit
-
For testing purposes you can use the command line
<https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_...>
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS 8 Stream
* CentOS Linux (or similar) 8.3 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS 8 Stream
* CentOS Linux (or similar) 8.3 or newer
* oVirt Node 4.4 based on CentOS 8 Stream (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available based on CentOS 8 Stream
- oVirt Node NG is already available based on CentOS 8 Stream
Additional Resources:
* Read more about the oVirt 4.4.6 release highlights:
http://www.ovirt.org/release/4.4.6/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.6/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
Re: Upgrade from 4.3.5 to 4.3.10 HE Host issue
by Marko Vrgotic
Hi Didi,
I compared hosted-engine.conf on all three machines and indeed hosts 1 and 3 have identical ones, apart from the host id.
The hosted-engine.conf on host 2, which I am trying to add back, contains only the host id and the CA path:
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
host_id=2
Can someone help me check whether there is DB or storage corruption?
Would it be destructive or risky to try to populate the hosted-engine.conf of host 2 with the missing values?
Any advice?
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
o: +31 (35) 6774131
m: +31 (65) 5734174
e: m.vrgotic(a)activevideo.com<mailto:m.vrgotic@activevideo.com>
w: www.activevideo.com<http://www.activevideo.com>
ActiveVideo Networks BV. Mediacentrum 3745 Joop van den Endeplein 1.1217 WJ Hilversum, The Netherlands. The information contained in this message may be legally privileged and confidential. It is intended to be read only by the individual or entity to whom it is addressed or by their designee. If the reader of this message is not the intended recipient, you are on notice that any distribution of this message, in any form, is strictly prohibited. If you have received this message in error, please immediately notify the sender and/or ActiveVideo Networks, LLC by telephone at +1 408.931.9200 and delete or destroy any copy of this message.
From: Marko Vrgotic <M.Vrgotic(a)activevideo.com>
Date: Wednesday, 14 April 2021 at 16:16
To: Yedidyah Bar David <didi(a)redhat.com>
Cc: users(a)ovirt.org <users(a)ovirt.org>
Subject: Re: [ovirt-users] Re: Upgrade from 4.3.5 to 4.3.10 HE Host issue
Hi Didi,
It looks like the issue was with Hosted-Engine Undeploy being incomplete: the other HE hosts still had the entries for the host I was trying to remove, so any subsequent HE Deploy on that host was failing.
I was able to get the other hosts to forget about this one by running hosted-engine --clean-metadata --host-id=2
Now I would like to add the host back to the HE pool, but I have a question: is there a time I should wait between cleaning the metadata and re-adding the host?
Kindly awaiting your reply.
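For reference, the cleanup and the follow-up check from this thread can be written out as (host id 2, as above; run from a host with working HE metadata access):

```shell
# Remove host 2's stale entry from the shared hosted-engine metadata:
hosted-engine --clean-metadata --host-id=2
# Then confirm host 2 no longer appears in the cluster view:
hosted-engine --vm-status
```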
From: Yedidyah Bar David <didi(a)redhat.com>
Date: Thursday, 18 March 2021 at 15:09
To: Marko Vrgotic <M.Vrgotic(a)activevideo.com>
Cc: users(a)ovirt.org <users(a)ovirt.org>
Subject: Re: [ovirt-users] Re: Upgrade from 4.3.5 to 4.3.10 HE Host issue
***CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you recognize the sender!!!***
Hi,
On Mon, Mar 8, 2021 at 4:55 PM Marko Vrgotic <M.Vrgotic(a)activevideo.com> wrote:
>
> The broker log, these lines are pretty much repeating:
>
>
>
> MainThread::WARNING::2021-03-03 09:19:12,086::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__) Can't connect vdsm storage: 'metadata_image_UUID can't be 'None'
Please compare the content of
/etc/ovirt-hosted-engine/hosted-engine.conf between all your hosts.
host id should be unique per host, but otherwise they should be
identical. If they are not, most likely there is some corruption
somewhere - in the engine db or shared storage.
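The suggested comparison can be sketched in shell. The sample file contents below are invented for illustration; on real hosts you would pull /etc/ovirt-hosted-engine/hosted-engine.conf from each host (e.g. over ssh) instead:

```shell
# Two made-up copies of hosted-engine.conf; only host_id should differ
# between healthy hosts.
cat > /tmp/he-conf.host1 <<'EOF'
host_id=1
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
gateway=192.168.0.1
EOF
cat > /tmp/he-conf.host2 <<'EOF'
host_id=2
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
EOF

# Compare two conf files, ignoring the per-host host_id line (exit 0 = match).
he_conf_matches() {
    grep -v '^host_id=' "$1" | sort > /tmp/he-conf.a
    grep -v '^host_id=' "$2" | sort > /tmp/he-conf.b
    diff -q /tmp/he-conf.a /tmp/he-conf.b > /dev/null
}

if he_conf_matches /tmp/he-conf.host1 /tmp/he-conf.host2; then
    echo "configs match (apart from host_id)"
else
    echo "configs differ"
fi
```

Here host2 is missing keys, so the sketch prints "configs differ", which matches the situation described in this thread.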
You might want to skim this for a general rather-low-level overview:
https://nam10.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.ovi...
Do you see no errors on your other hosts? In -ha logs?
Please also note that 4.3 is EOL. The deploy process was completely
rewritten in 4.4 (in Ansible; the previous one was Python), although it
should in principle behave similarly - so if your data is corrupted,
upgrading to 4.4 probably won't fix it.
Good luck and best regards,
>
> MainThread::INFO::2021-03-03 09:19:12,829::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run) ovirt-hosted-engine-ha broker 2.3.6 started
>
> MainThread::INFO::2021-03-03 09:19:12,829::monitor::40::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Searching for submonitors in /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/sub
>
> monitors
>
> MainThread::INFO::2021-03-03 09:19:12,829::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load
>
> MainThread::INFO::2021-03-03 09:19:12,832::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load-no-engine
>
> MainThread::INFO::2021-03-03 09:19:12,832::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor engine-health
>
> MainThread::INFO::2021-03-03 09:19:12,832::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mem-free
>
> MainThread::INFO::2021-03-03 09:19:12,833::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mgmt-bridge
>
> MainThread::INFO::2021-03-03 09:19:12,833::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor network
>
> MainThread::INFO::2021-03-03 09:19:12,833::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor storage-domain
>
> MainThread::INFO::2021-03-03 09:19:12,833::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load
>
> MainThread::INFO::2021-03-03 09:19:12,834::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load-no-engine
>
> MainThread::INFO::2021-03-03 09:19:12,835::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor engine-health
>
> MainThread::INFO::2021-03-03 09:19:12,835::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mem-free
>
> MainThread::INFO::2021-03-03 09:19:12,835::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mgmt-bridge
>
> MainThread::INFO::2021-03-03 09:19:12,835::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor network
>
> MainThread::INFO::2021-03-03 09:19:12,836::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor storage-domain
>
> MainThread::INFO::2021-03-03 09:19:12,836::monitor::50::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Finished loading submonitors
>
> MainThread::WARNING::2021-03-03 09:19:12,836::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__) Can't connect vdsm storage: 'metadata_image_UUID can't be 'None'
>
> MainThread::INFO::2021-03-03 09:19:13,574::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run) ovirt-hosted-engine-ha broker 2.3.6 started
>
> MainThread::INFO::2021-03-03 09:19:13,575::monitor::40::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Searching for submonitors in /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors
>
> MainThread::INFO::2021-03-03 09:19:13,575::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load
>
> MainThread::INFO::2021-03-03 09:19:13,577::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load-no-engine
>
> MainThread::INFO::2021-03-03 09:19:13,578::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor engine-health
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> From: Marko Vrgotic <M.Vrgotic(a)activevideo.com>
> Date: Monday, 8 March 2021 at 15:34
> To: Yedidyah Bar David <didi(a)redhat.com>
> Cc: users(a)ovirt.org <users(a)ovirt.org>
> Subject: Re: [ovirt-users] Re: Upgrade from 4.3.5 to 4.3.10 HE Host issue
>
> Hi Didi,
>
>
>
> Please find the attached logs from Host and Engine.
>
>
>
> Host ovirt-sj-02 HE Undeploy 2021-03-08 14:15:52 till 2021-03-08 14:18:24
>
>
>
>
>
> Host ovirt-sj-02 HE Deploy 2021-03-08 14:20:51 till 2021-03-08 14:23:22
>
>
>
> I do see errors in the agent and broker and vdsm, but I do not see why it happened.
>
>
>
> Thank you for helping, let me know if any additional files are needed.
>
>
>
>
>
>
>
>
>
>
>
>
> From: Yedidyah Bar David <didi(a)redhat.com>
> Date: Monday, 8 March 2021 at 09:25
> To: Marko Vrgotic <M.Vrgotic(a)activevideo.com>
> Cc: users(a)ovirt.org <users(a)ovirt.org>
> Subject: Re: [ovirt-users] Re: Upgrade from 4.3.5 to 4.3.10 HE Host issue
>
>
> Hi,
>
> On Mon, Mar 8, 2021 at 10:13 AM Marko Vrgotic <M.Vrgotic(a)activevideo.com> wrote:
> >
> > I cannot find the reason why the re-Deployment on this Hosts fails, as it was already deployed on it before.
> >
> > No errors found in the deployment, but it seems half done, based on the messages I sent in the previous email.
>
> Please check/share all relevant logs. Thanks. Can be all of /var/log
> from engine and hosts, and at least:
>
> /var/log/ovirt-engine/engine.log
>
> /var/log/vdsm/*
>
> /var/log/ovirt-hosted-engine-ha/*
>
> Best regards,
> --
> Didi
--
Didi
Failed certificate expiration
by Fabrice Bacchella
I missed the certificate expiration for the oVirt PKI.
So the engine is now totally unable to talk to the hosts. Is there any documentation for this kind of failure recovery?
hosted-engine volume removed 3 bricks (replica 3) out of 12 bricks, now I cant start hosted-engine vm
by adrianquintero@gmail.com
Hi,
I tried removing a replica 3 brick set from a distributed replicated volume which holds the oVirt hosted-engine VM.
As soon as I hit commit the VM went into pause. I tried to recover the volume ID "daa292aa-be5c-426e-b124-64263bf8a3ee" from the removed bricks, and now I am able to run "hosted-engine --vm-status".
Error I see in the logs:
---------------------------------
MainThread::WARNING::2021-04-14 17:26:12,348::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__) Can't connect vdsm storage: Command Image.prepare with args {'imageID': '9feffcfa-6af2-4de3-b7d8-e57b84d56003', 'storagepoolID': '00000000-0000-0000-0000-000000000000', 'volumeID': 'daa292aa-be5c-426e-b124-64263bf8a3ee', 'storagedomainID': '8db17b28-ecbb-4853-8a90-6ed2f69301eb'} failed:
(code=201, message=Volume does not exist: (u'daa292aa-be5c-426e-b124-64263bf8a3ee',))
---------------------------------
on the following mount I see the volumeID twice:
------------------------
[root@vmm10 images]# find /rhev/data-center/mnt/glusterSD/192.168.0.4\:_engine/ -name "daa292aa-be5c-426e-b124-64263bf8a3ee"
/rhev/data-center/mnt/glusterSD/192.168.0.4:_engine/8db17b28-ecbb-4853-8a90-6ed2f69301eb/images/9feffcfa-6af2-4de3-b7d8-e57b84d56003/daa292aa-be5c-426e-b124-64263bf8a3ee
/rhev/data-center/mnt/glusterSD/192.168.0.4:_engine/8db17b28-ecbb-4853-8a90-6ed2f69301eb/images/9feffcfa-6af2-4de3-b7d8-e57b84d56003/daa292aa-be5c-426e-b124-64263bf8a3ee
[root@vmm10 9feffcfa-6af2-4de3-b7d8-e57b84d56003]# ls -lh
total 131M
-rw-rw----. 1 vdsm kvm 64M Apr 14 19:40 daa292aa-be5c-426e-b124-64263bf8a3ee
-rw-rw----. 1 vdsm kvm 64M Apr 14 19:40 daa292aa-be5c-426e-b124-64263bf8a3ee
-rw-rw----. 1 vdsm kvm 1.0M Jul 1 2020 daa292aa-be5c-426e-b124-64263bf8a3ee.lease
-rw-rw----. 1 vdsm kvm 1.0M Jul 1 2020 daa292aa-be5c-426e-b124-64263bf8a3ee.lease
-rw-r--r--. 1 vdsm kvm 329 Jul 1 2020 daa292aa-be5c-426e-b124-64263bf8a3ee.meta
-rw-r--r--. 1 vdsm kvm 329 Jul 1 2020 daa292aa-be5c-426e-b124-64263bf8a3ee.meta
Any ideas on how to recover ?
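A first diagnostic step, assuming the volume is named engine as the mount path suggests, would be to check gluster's own view of the bricks and of any pending heals:

```shell
gluster volume info engine        # confirm the brick layout after the remove-brick
gluster volume heal engine info   # list entries pending heal or in split-brain
```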
Can't Create VM from Template in VM portal
by Andrey Rusakov
We are trying to organize a self-managed VM portal for our customers,
using the VM portal in oVirt.
What we did: create a user, create a group, create a quota, create a VM template.
All permissions are the defaults + VmCreator (to allow the user to create disks from the template).
Problem
Can't create a VM from the template - we get "No Disks Defined" on the "Storage" page (create new VM).
Possible solution: grant the "Disk Operator" role, but in that case all users can see all VMs.
Is it possible to limit users to seeing only the VMs in their own group, while still allowing them to create new VMs from the template?
Ovirt node - 4.4.4 -> 4.4.5 upgrade failed - do i need a reinstall ? tried removing /dev/onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1 to reinstall rpm..
by morgan cox
Hi.
I have a 2 node cluster (both installed from the oVirt Node ISO). One server is 100% fine, but I tried to update the other while there were issues with the ovirtmgmt network (which I wasn't aware of at the time). As a result the update failed and the server was still on ovirt-node 4.4.4 afterwards, although I could see the LVM layer for 4.4.5.
The network is fine now (hence the other server being OK). I tried reinstalling the package ovirt-node-ng-image-update-4.4.5.1-1.el8.noarch.rpm but it failed because the LVM LV for 4.4.5 already existed ...
So I tried removing the LV - i.e
# lvremove /dev/onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1
however now I'm sure I have fewer 4.4.4 LVs ..
i.e
------
[root@ng2-ovirt-kvm3 mcox]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
home onn Vwi-aotz-- 1.00g pool00 10.05
ovirt-node-ng-4.4.4.1-0.20210208.0+1 onn Vwi-aotz-- 191.00g pool00 2.27
pool00 onn twi-aotz-- 218.00g 2.48 1.92
swap onn -wi-ao---- 4.00g
tmp onn Vwi-aotz-- 1.00g pool00 2.24
var onn Vwi-aotz-- 15.00g pool00 2.80
var_crash onn Vwi-aotz-- 10.00g pool00 0.21
var_log onn Vwi-aotz-- 8.00g pool00 5.97
var_log_audit onn Vwi-aotz-- 2.00g pool00 1.14
----
Where as on the working server I have these Ovirt Node LVs
ovirt-node-ng-4.4.4.1-0.20210208.0 onn Vwi---tz-k 191.00g pool00 root
ovirt-node-ng-4.4.4.1-0.20210208.0+1 onn Vwi-a-tz-- 191.00g pool00 ovirt-node-ng-4.4.4.1-0.20210208.0 1.86
ovirt-node-ng-4.4.5.1-0.20210323.0 onn Vri---tz-k 191.00g pool00
ovirt-node-ng-4.4.5.1-0.20210323.0+1 onn Vwi-aotz-- 191.00g pool00 ovirt-node-ng-4.4.5.1-0.20210323.0 1.58
Anyway, I tried to reinstall the ovirt-node-ng-image-update-4.4.5.1-1.el8.noarch.rpm package after removing the LV and the files in /boot.
But the install still fails. Is there anything I can do to update this node, or should I bite the bullet and reinstall?
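Before deciding on a reinstall, it may help to see which layers imgbased/nodectl currently recognize on the broken node (both commands ship with oVirt Node):

```shell
nodectl info    # show the layer layout as imgbased sees it
nodectl check   # run the node's built-in consistency checks
```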
this is from /var/log/imgbased.log ->
--------------------------------------
2021-04-14 16:05:56,634 [INFO] (MainThread) Adding a new layer after <Base ovirt-node-ng-4.4.5.1-0.20210323.0 [] />
2021-04-14 16:05:56,634 [INFO] (MainThread) Adding a new layer after <Base ovirt-node-ng-4.4.5.1-0.20210323.0 [] />
2021-04-14 16:05:56,634 [DEBUG] (MainThread) Basing new layer on previous: <Base ovirt-node-ng-4.4.5.1-0.20210323.0 [] />
2021-04-14 16:05:56,634 [DEBUG] (MainThread) Finding next layer based on <Base ovirt-node-ng-4.4.5.1-0.20210323.0 [] />
2021-04-14 16:05:56,634 [DEBUG] (MainThread) Suggesting for layer for base ovirt-node-ng-4.4.5.1-0.20210323.0
2021-04-14 16:05:56,635 [DEBUG] (MainThread) ... without layers
2021-04-14 16:05:56,635 [INFO] (MainThread) New layer will be: <Layer ovirt-node-ng-4.4.5.1-0.20210323.0+1 />
2021-04-14 16:05:56,635 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '--select', 'vg_tags = imgbased:vg', '-o', 'vg_name'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:05:56,697 [DEBUG] (MainThread) Returned: b'onn'
2021-04-14 16:05:56,698 [DEBUG] (MainThread) Calling: (['vgs', '--noheadings', '--ignoreskippedcluster', '@imgbased:pool', '-o', 'lv_full_name'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:05:56,757 [DEBUG] (MainThread) Returned: b'onn/pool00'
2021-04-14 16:05:56,758 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '--nosuffix', '--units', 'm', '-o', 'metadata_percent,lv_metadata_size', 'onn/pool00'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:05:56,829 [DEBUG] (MainThread) Returned: b'2.05 1024.00'
2021-04-14 16:05:56,830 [DEBUG] (MainThread) Pool: onn/pool00, metadata size=1024.0M (2.05%)
2021-04-14 16:05:56,830 [DEBUG] (MainThread) Calling: (['lvchange', '--activate', 'y', 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0', '--ignoreactivationskip'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:05:56,953 [DEBUG] (MainThread) Calling: (['lvcreate', '--snapshot', '--name', 'ovirt-node-ng-4.4.5.1-0.20210323.0+1', 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0'],) {'close_fds': True, 'stderr': -2}
2021-04-14 16:05:57,111 [DEBUG] (MainThread) Returned: b'WARNING: Sum of all thin volume sizes (610.00 GiB) exceeds the size of thin pool onn/pool00 and the size of whole volume group (<277.78 GiB).\n Logical volume "ovirt-node-ng-4.4.5.1-0.20210323.0+1" created.'
2021-04-14 16:05:57,112 [DEBUG] (MainThread) Calling: (['lvchange', '--activate', 'y', 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1', '--ignoreactivationskip'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:05:57,231 [DEBUG] (MainThread) Calling: (['lvchange', '--addtag', 'imgbased:layer', 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:05:57,310 [DEBUG] (MainThread) Returned: b'Logical volume onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1 changed.'
2021-04-14 16:05:57,311 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-olv_path', 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:05:57,383 [DEBUG] (MainThread) Returned: b'/dev/onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1'
2021-04-14 16:05:57,400 [DEBUG] (MainThread) Calling: (['mktemp', '-d', '--tmpdir', 'mnt.XXXXX'],) {'close_fds': True, 'stderr': -2}
2021-04-14 16:05:57,404 [DEBUG] (MainThread) Returned: b'/tmp/mnt.aGJ3A'
2021-04-14 16:05:57,404 [DEBUG] (MainThread) Calling: (['mount', '-onouuid', '/dev/onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1', '/tmp/mnt.aGJ3A'],) {'close_fds': True, 'stderr': -2}
2021-04-14 16:05:57,537 [DEBUG] (MainThread) Calling: (['umount', '-l', '/tmp/mnt.aGJ3A'],) {'close_fds': True, 'stderr': -2}
2021-04-14 16:05:57,561 [DEBUG] (MainThread) Calling: (['rmdir', '/tmp/mnt.aGJ3A'],) {'close_fds': True, 'stderr': -2}
2021-04-14 16:05:57,565 [DEBUG] (MainThread) Running: ['xfs_admin', '-U', 'generate', '/dev/onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1']
2021-04-14 16:05:57,566 [DEBUG] (MainThread) Calling: (['xfs_admin', '-U', 'generate', '/dev/onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1'],) {'stderr': -2, 'close_fds': True}
2021-04-14 16:05:57,957 [DEBUG] (MainThread) Calling: (['lvchange', '--setactivationskip', 'n', 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:05:58,040 [DEBUG] (MainThread) Returned: b'Logical volume onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1 changed.'
2021-04-14 16:05:58,041 [DEBUG] (MainThread) Calling: (['lvchange', '--setactivationskip', 'y', 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:05:58,116 [DEBUG] (MainThread) Returned: b'Logical volume onn/ovirt-node-ng-4.4.5.1-0.20210323.0 changed.'
2021-04-14 16:05:58,117 [DEBUG] (MainThread) Got: <LV 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1' /> and <LV 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0' />
2021-04-14 16:05:58,117 [DEBUG] (MainThread) Calling: (['mount', '--make-rprivate', '/var'],) {'close_fds': True, 'stderr': -2}
2021-04-14 16:05:58,122 [DEBUG] (MainThread) Calling: (['systemctl', 'is-active', 'vdsmd'],) {'close_fds': True, 'stderr': -2}
2021-04-14 16:05:58,141 [DEBUG] (MainThread) Returned: b'active'
2021-04-14 16:05:58,141 [DEBUG] (MainThread) Calling: (['systemctl', 'stop', 'vdsmd'],) {'close_fds': True, 'stderr': -2}
2021-04-14 16:06:08,940 [DEBUG] (MainThread) Calling: (['systemctl', 'stop', 'supervdsmd'],) {'close_fds': True, 'stderr': -2}
2021-04-14 16:06:09,072 [DEBUG] (MainThread) Calling: (['systemctl', 'stop', 'vdsm-network'],) {'close_fds': True, 'stderr': -2}
2021-04-14 16:06:09,090 [DEBUG] (MainThread) Calling: (['vgchange', '-ay', '--select', 'vg_tags = imgbased:vg'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:06:09,167 [DEBUG] (MainThread) Returned: b'11 logical volume(s) in volume group "onn" now active'
2021-04-14 16:06:09,168 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-o', 'lv_full_name', '--select', 'lv_tags = imgbased:base || lv_tags = imgbased:layer'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:06:09,229 [DEBUG] (MainThread) Returned: b'onn/ovirt-node-ng-4.4.4.1-0.20210208.0+1\n onn/ovirt-node-ng-4.4.5.1-0.20210323.0 \n onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1'
2021-04-14 16:06:09,230 [DEBUG] (MainThread) All LV names: ['onn/ovirt-node-ng-4.4.4.1-0.20210208.0+1', 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0', 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1']
2021-04-14 16:06:09,230 [DEBUG] (MainThread) All LVS: [<LV 'onn/ovirt-node-ng-4.4.4.1-0.20210208.0+1' />, <LV 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0' />, <LV 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1' />]
2021-04-14 16:06:09,230 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-olv_tags', 'onn/ovirt-node-ng-4.4.4.1-0.20210208.0+1'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:06:09,293 [DEBUG] (MainThread) Returned: b'imgbased:layer'
2021-04-14 16:06:09,294 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-olv_tags', 'onn/ovirt-node-ng-4.4.4.1-0.20210208.0+1'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:06:09,357 [DEBUG] (MainThread) Returned: b'imgbased:layer'
2021-04-14 16:06:09,358 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-olv_tags', 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:06:09,417 [DEBUG] (MainThread) Returned: b'imgbased:base'
2021-04-14 16:06:09,418 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-olv_tags', 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:06:09,490 [DEBUG] (MainThread) Returned: b'imgbased:layer'
2021-04-14 16:06:09,491 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-olv_tags', 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:06:09,557 [DEBUG] (MainThread) Returned: b'imgbased:layer'
2021-04-14 16:06:09,558 [DEBUG] (MainThread) Our LVS: [<LV 'onn/ovirt-node-ng-4.4.4.1-0.20210208.0+1' />, <LV 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0' />, <LV 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1' />]
2021-04-14 16:06:09,558 [DEBUG] (MainThread) Names: ['ovirt-node-ng-4.4.4.1-0.20210208.0+1', 'ovirt-node-ng-4.4.5.1-0.20210323.0', 'ovirt-node-ng-4.4.5.1-0.20210323.0+1']
2021-04-14 16:06:09,558 [DEBUG] (MainThread) Images: [<Layer ovirt-node-ng-4.4.4.1-0.20210208.0+1 />, <Base ovirt-node-ng-4.4.5.1-0.20210323.0 [] />, <Layer ovirt-node-ng-4.4.5.1-0.20210323.0+1 />]
2021-04-14 16:06:09,558 [DEBUG] (MainThread) Calling: (['umount', '-l', '/tmp/mnt.o5bFp'],) {'close_fds': True, 'stderr': -2}
2021-04-14 16:06:09,579 [DEBUG] (MainThread) Calling: (['rmdir', '/tmp/mnt.o5bFp'],) {'close_fds': True, 'stderr': -2}
2021-04-14 16:06:09,583 [DEBUG] (MainThread) Calling: (['umount', '-l', '/tmp/mnt.CcHrE'],) {'close_fds': True, 'stderr': -2}
2021-04-14 16:06:09,601 [DEBUG] (MainThread) Calling: (['rmdir', '/tmp/mnt.CcHrE'],) {'close_fds': True, 'stderr': -2}
2021-04-14 16:06:09,605 [ERROR] (MainThread) Update failed, resetting registered LVs
2021-04-14 16:06:09,606 [DEBUG] (MainThread) Calling: (['sync'],) {'close_fds': True, 'stderr': -2}
2021-04-14 16:06:09,615 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-olv_dm_path', 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:06:09,683 [DEBUG] (MainThread) Returned: b'/dev/mapper/onn-ovirt--node--ng--4.4.5.1--0.20210323.0'
2021-04-14 16:06:09,684 [DEBUG] (MainThread) Calling: (['lvremove', '-ff', 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:06:09,820 [DEBUG] (MainThread) Returned: b'Logical volume "ovirt-node-ng-4.4.5.1-0.20210323.0" successfully removed'
2021-04-14 16:06:09,821 [DEBUG] (MainThread) Calling: (['lvs', '--noheadings', '--ignoreskippedcluster', '-olv_dm_path', 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:06:09,886 [DEBUG] (MainThread) Returned: b'/dev/mapper/onn-ovirt--node--ng--4.4.5.1--0.20210323.0+1'
2021-04-14 16:06:09,887 [DEBUG] (MainThread) Calling: (['lvremove', '-ff', 'onn/ovirt-node-ng-4.4.5.1-0.20210323.0+1'],) {'stderr': <_io.TextIOWrapper name='/dev/null' mode='w' encoding='UTF-8'>, 'close_fds': True}
2021-04-14 16:06:10,207 [DEBUG] (MainThread) Returned: b'Logical volume "ovirt-node-ng-4.4.5.1-0.20210323.0+1" successfully removed'
Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/tmp/tmp.cyXrOTxRgb/usr/lib/python3.6/site-packages/imgbased/__main__.py", line 53, in <module>
    CliApplication()
  File "/tmp/tmp.cyXrOTxRgb/usr/lib/python3.6/site-packages/imgbased/__init__.py", line 82, in CliApplication
    app.hooks.emit("post-arg-parse", args)
  File "/tmp/tmp.cyXrOTxRgb/usr/lib/python3.6/site-packages/imgbased/hooks.py", line 120, in emit
    cb(self.context, *args)
  File "/tmp/tmp.cyXrOTxRgb/usr/lib/python3.6/site-packages/imgbased/plugins/update.py", line 75, in post_argparse
    six.reraise(*exc_info)
  File "/tmp/tmp.cyXrOTxRgb/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
    raise value
  File "/tmp/tmp.cyXrOTxRgb/usr/lib/python3.6/site-packages/imgbased/plugins/update.py", line 66, in post_argparse
    base, _ = LiveimgExtractor(app.imgbase).extract(args.FILENAME)
  File "/tmp/tmp.cyXrOTxRgb/usr/lib/python3.6/site-packages/imgbased/plugins/update.py", line 148, in extract
    "%s" % size, nvr)
  File "/tmp/tmp.cyXrOTxRgb/usr/lib/python3.6/site-packages/imgbased/plugins/update.py", line 128, in add_base_with_tree
    new_layer_lv = self.imgbase.add_layer(new_base)
  File "/tmp/tmp.cyXrOTxRgb/usr/lib/python3.6/site-packages/imgbased/imgbase.py", line 213, in add_layer
    self.hooks.emit("new-layer-added", prev_lv, new_lv)
  File "/tmp/tmp.cyXrOTxRgb/usr/lib/python3.6/site-packages/imgbased/hooks.py", line 120, in emit
    cb(self.context, *args)
  File "/tmp/tmp.cyXrOTxRgb/usr/lib/python3.6/site-packages/imgbased/plugins/osupdater.py", line 88, in on_new_layer
    previous_layer_lv = get_prev_layer_lv(imgbase, new_lv)
  File "/tmp/tmp.cyXrOTxRgb/usr/lib/python3.6/site-packages/imgbased/plugins/osupdater.py", line 143, in get_prev_layer_lv
    layer_before = imgbase.naming.layer_before(new_layer)
  File "/tmp/tmp.cyXrOTxRgb/usr/lib/python3.6/site-packages/imgbased/naming.py", line 74, in layer_before
    layers = self.layers()
  File "/tmp/tmp.cyXrOTxRgb/usr/lib/python3.6/site-packages/imgbased/naming.py", line 58, in layers
    for b in self.tree():
  File "/tmp/tmp.cyXrOTxRgb/usr/lib/python3.6/site-packages/imgbased/naming.py", line 226, in tree
    bases[img.base.nvr].layers.append(img)
KeyError: <NVR ovirt-node-ng-4.4.4.1-0.20210208.0 />
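For what it's worth, the "Our LVS" and "Images" lines above show a layer LV for 4.4.4.1 (`ovirt-node-ng-4.4.4.1-0.20210208.0+1`) but no matching base LV, which is exactly the shape that makes `tree()` fail. This is not imgbased's actual code, just a simplified sketch of the failure mode (the `Base`/`Layer` classes and `base_nvr` attribute are illustrative assumptions):

```python
# Simplified sketch of imgbased naming.tree(): it first indexes the base
# LVs by NVR, then attaches each layer to its base. A layer whose base LV
# was removed (here the 4.4.4.1 base) triggers the KeyError seen above.
class Base:
    def __init__(self, nvr):
        self.nvr = nvr
        self.layers = []

class Layer:
    def __init__(self, nvr, base_nvr):
        self.nvr = nvr
        self.base_nvr = base_nvr

def tree(images):
    bases = {}
    for img in images:
        if isinstance(img, Base):
            bases[img.nvr] = img
    for img in images:
        if isinstance(img, Layer):
            # KeyError here if the layer's base is not among the LVs
            bases[img.base_nvr].layers.append(img)
    return bases

# The three images reported in the log above:
images = [
    Layer("ovirt-node-ng-4.4.4.1-0.20210208.0+1",
          "ovirt-node-ng-4.4.4.1-0.20210208.0"),   # base LV is missing
    Base("ovirt-node-ng-4.4.5.1-0.20210323.0"),
    Layer("ovirt-node-ng-4.4.5.1-0.20210323.0+1",
          "ovirt-node-ng-4.4.5.1-0.20210323.0"),
]

try:
    tree(images)
except KeyError as e:
    print("KeyError:", e)
```

So the update plugin dies while walking the layer tree, and the rollback you see afterwards (`lvremove` of the freshly created 4.4.5.1 LVs) is the cleanup path, not the root cause.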
--------------------
Re: Upgrade from 4.3.5 to 4.3.10 HE Host issue
by Yedidyah Bar David
Hi,
On Mon, Mar 8, 2021 at 4:55 PM Marko Vrgotic <M.Vrgotic(a)activevideo.com> wrote:
>
> The broker log, these lines are pretty much repeating:
>
>
>
> MainThread::WARNING::2021-03-03 09:19:12,086::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__) Can't connect vdsm storage: 'metadata_image_UUID can't be 'None'
Please compare the content of
/etc/ovirt-hosted-engine/hosted-engine.conf between all your hosts.
host id should be unique per host, but otherwise they should be
identical. If they are not, most likely there is some corruption
somewhere - in the engine db or shared storage.
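[The comparison suggested above can be sketched roughly like this - a hedged example, not part of the original mail; the file contents and key names are assumptions based on this thread, and host_id is excluded since it legitimately differs per host:]

```python
# Sketch: diff two hosted-engine.conf contents as key=value maps,
# ignoring host_id (expected to be unique per host). A non-empty result
# hints at the kind of divergence discussed above, e.g. a host where
# metadata_image_UUID ended up as None.

def parse_conf(text):
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip()
    return conf

def diff_confs(a_text, b_text, ignore=("host_id",)):
    a, b = parse_conf(a_text), parse_conf(b_text)
    keys = (set(a) | set(b)) - set(ignore)
    return {k: (a.get(k), b.get(k))
            for k in sorted(keys) if a.get(k) != b.get(k)}

# Hypothetical contents from two hosts:
host1 = "host_id=1\nmetadata_image_UUID=abc\nstorage=srv:/he"
host2 = "host_id=2\nmetadata_image_UUID=None\nstorage=srv:/he"
print(diff_confs(host1, host2))
```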
You might want to skim this for a general rather-low-level overview:
https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf
Do you see no errors on your other hosts? In -ha logs?
Please also note that 4.3 is EOL. The deploy process was completely
rewritten in 4.4 (in Ansible; the previous one was in Python), although it
should in principle behave similarly - so if your data is corrupted,
upgrading to 4.4 probably won't fix it.
Good luck and best regards,
>
> MainThread::INFO::2021-03-03 09:19:12,829::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run) ovirt-hosted-engine-ha broker 2.3.6 started
>
> MainThread::INFO::2021-03-03 09:19:12,829::monitor::40::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Searching for submonitors in /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors
>
>
> MainThread::INFO::2021-03-03 09:19:12,829::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load
>
> MainThread::INFO::2021-03-03 09:19:12,832::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load-no-engine
>
> MainThread::INFO::2021-03-03 09:19:12,832::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor engine-health
>
> MainThread::INFO::2021-03-03 09:19:12,832::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mem-free
>
> MainThread::INFO::2021-03-03 09:19:12,833::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mgmt-bridge
>
> MainThread::INFO::2021-03-03 09:19:12,833::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor network
>
> MainThread::INFO::2021-03-03 09:19:12,833::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor storage-domain
>
> MainThread::INFO::2021-03-03 09:19:12,833::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load
>
> MainThread::INFO::2021-03-03 09:19:12,834::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load-no-engine
>
> MainThread::INFO::2021-03-03 09:19:12,835::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor engine-health
>
> MainThread::INFO::2021-03-03 09:19:12,835::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mem-free
>
> MainThread::INFO::2021-03-03 09:19:12,835::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mgmt-bridge
>
> MainThread::INFO::2021-03-03 09:19:12,835::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor network
>
> MainThread::INFO::2021-03-03 09:19:12,836::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor storage-domain
>
> MainThread::INFO::2021-03-03 09:19:12,836::monitor::50::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Finished loading submonitors
>
> MainThread::WARNING::2021-03-03 09:19:12,836::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__) Can't connect vdsm storage: 'metadata_image_UUID can't be 'None'
>
> MainThread::INFO::2021-03-03 09:19:13,574::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run) ovirt-hosted-engine-ha broker 2.3.6 started
>
> MainThread::INFO::2021-03-03 09:19:13,575::monitor::40::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Searching for submonitors in /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors
>
> MainThread::INFO::2021-03-03 09:19:13,575::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load
>
> MainThread::INFO::2021-03-03 09:19:13,577::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load-no-engine
>
> MainThread::INFO::2021-03-03 09:19:13,578::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor engine-health
>
>
>
>
>
>
>
> -----
>
> kind regards/met vriendelijke groeten
>
>
>
> Marko Vrgotic
> Sr. System Engineer @ System Administration
>
>
> ActiveVideo
>
> o: +31 (35) 6774131
>
> m: +31 (65) 5734174
>
> e: m.vrgotic(a)activevideo.com
> w: www.activevideo.com
>
>
>
> ActiveVideo Networks BV. Mediacentrum 3745 Joop van den Endeplein 1.1217 WJ Hilversum, The Netherlands. The information contained in this message may be legally privileged and confidential. It is intended to be read only by the individual or entity to whom it is addressed or by their designee. If the reader of this message is not the intended recipient, you are on notice that any distribution of this message, in any form, is strictly prohibited. If you have received this message in error, please immediately notify the sender and/or ActiveVideo Networks, LLC by telephone at +1 408.931.9200 and delete or destroy any copy of this message.
>
>
>
>
>
>
>
> From: Marko Vrgotic <M.Vrgotic(a)activevideo.com>
> Date: Monday, 8 March 2021 at 15:34
> To: Yedidyah Bar David <didi(a)redhat.com>
> Cc: users(a)ovirt.org <users(a)ovirt.org>
> Subject: Re: [ovirt-users] Re: Upgrade from 4.3.5 to 4.3.10 HE Host issue
>
> Hi Didi,
>
>
>
> Please find the attached logs from Host and Engine.
>
>
>
> Host ovirt-sj-02 HE Undeploy 2021-03-08 14:15:52 till 2021-03-08 14:18:24
>
>
>
>
>
> Host ovirt-sj-02 HE Deploy 2021-03-08 14:20:51 till 2021-03-08 14:23:22
>
>
>
> I do see errors in the agent and broker and vdsm, but I do not see why it happened.
>
>
>
> Thank you for helping, let me know if any additional files are needed.
>
>
>
>
>
>
>
>
>
>
>
>
> From: Yedidyah Bar David <didi(a)redhat.com>
> Date: Monday, 8 March 2021 at 09:25
> To: Marko Vrgotic <M.Vrgotic(a)activevideo.com>
> Cc: users(a)ovirt.org <users(a)ovirt.org>
> Subject: Re: [ovirt-users] Re: Upgrade from 4.3.5 to 4.3.10 HE Host issue
>
>
> Hi,
>
> On Mon, Mar 8, 2021 at 10:13 AM Marko Vrgotic <M.Vrgotic(a)activevideo.com> wrote:
> >
> > I cannot find the reason why the re-deployment on this Host fails, as it was already deployed on it before.
> >
> > No errors found in the deployment, but it seems half done, based on messages I sent in a previous email.
>
> Please check/share all relevant logs. Thanks. Can be all of /var/log
> from engine and hosts, and at least:
>
> /var/log/ovirt-engine/engine.log
>
> /var/log/vdsm/*
>
> /var/log/ovirt-hosted-engine-ha/*
>
> Best regards,
> --
> Didi
--
Didi
--------------------
Install oVirt using the Cockpit wizard
by rajkumar madhu
Hi Team,
I have installed the oVirt repo on my CentOS 8 server ( dnf install
https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm )
and then, while executing the engine-setup command on the server, I am getting the
below error message. Could you please check and give me some suggestions to
resolve the issue.
engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files:
/etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf,
/etc/ovirt-engine-setup.conf.d/10-packaging.conf,
/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
Log file:
/var/log/ovirt-engine/setup/ovirt-engine-setup-20210412024224-fvaq4e.log
Version: otopi-1.9.4 (otopi-1.9.4-1.el8)
[ ERROR ] Failed to execute stage 'Environment setup': Cannot connect to
ovirt cinderlib database using existing credentials:
ovirt_cinderlib@localhost:5432
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20210412024224-fvaq4e.log
[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20210412024227-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
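[A hedged aside, not part of the original mail: the error says engine-setup could not connect to the ovirt_cinderlib database with *existing* credentials, which often points at leftovers from an earlier setup attempt. The answer file logged above records what setup used; the sketch below pulls the database-related keys out of such a file so they can be checked manually against PostgreSQL with psql. The `OVESETUP_CINDERLIB_DB/*` key names and the otopi `type:value` encoding are assumptions, not verified:]

```python
# Sketch: extract cinderlib DB settings from an otopi answer file.
# otopi stores values as "type:value", e.g. "str:localhost", so we strip
# the type prefix. Key names are assumed, not taken from oVirt docs.

def db_keys(text, prefix="OVESETUP_CINDERLIB_DB/"):
    found = {}
    for line in text.splitlines():
        if line.startswith(prefix) and "=" in line:
            key, _, typed_value = line.partition("=")
            _, _, value = typed_value.partition(":")
            found[key[len(prefix):]] = value
    return found

# Hypothetical answer-file excerpt:
sample = (
    "OVESETUP_CINDERLIB_DB/host=str:localhost\n"
    "OVESETUP_CINDERLIB_DB/port=int:5432\n"
    "OVESETUP_CINDERLIB_DB/database=str:ovirt_cinderlib\n"
)
print(db_keys(sample))
```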
Regards,
Rajkumar M
9176772077.
--------------------
Ovirt with Oracle Linux 8.3 & Cockpit - Direct Luns
by hansontodd@gmail.com
Is oVirt compatible with Oracle Linux 8.3? I have KVM and Cockpit running on Oracle Linux 8.3 but don't see a way to map direct LUNs (like RDMs) to a VM in the Cockpit interface.
Do I need to move back to CentOS 7.x or RHEL 7.x with oVirt to be able to use direct LUNs? Can oVirt and Cockpit coexist?