Re: Engine Upgrade to 4.4.5 Version Number Question
by Yedidyah Bar David
On Sat, Apr 10, 2021 at 8:08 AM Nur Imam Febrianto <nur_imam(a)outlook.com> wrote:
>
> Here is the setup logs.
Do you have "everything" (engine, dwh, etc.) on the same machine, or
are different components set up on different machines?
Any chance you manually edited
/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf for any
reason?
I see there:
2021-03-18 22:45:25,952+0700 DEBUG otopi.context
context.dumpEnvironment:775 ENV
OVESETUP_ENGINE_CORE/enable=bool:'False'
This normally happens only if the engine was not configured on this machine.
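If you want to double-check, something like this lists every place the
flag is set (a minimal sketch; it only assumes the default drop-in
directory on the engine machine):

    # List every OVESETUP_ENGINE_CORE setting found in the otopi setup drop-ins.
    import glob

    for path in sorted(glob.glob('/etc/ovirt-engine-setup.conf.d/*.conf')):
        with open(path) as conf:
            for line in conf:
                if 'OVESETUP_ENGINE_CORE' in line:
                    print('%s: %s' % (path, line.strip()))

On a machine where the engine is configured, the enable value should be True.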
Best regards,
>
> Thanks.
>
>
>
> From: Yedidyah Bar David
> Sent: 07 April 2021 12:36
> To: Nur Imam Febrianto
> Cc: oVirt Users
> Subject: Re: [ovirt-users] Engine Upgrade to 4.4.5 Version Number Question
>
>
>
> On Tue, Apr 6, 2021 at 6:49 PM Nur Imam Febrianto <nur_imam(a)outlook.com> wrote:
> >
> > I’m currently trying to upgrade our cluster from 4.4.4 to 4.4.5. All using oVirt Node.
> >
> > All hosts were successfully upgraded to 4.4.5 (I can see the image layer changed from 4.4.4 to 4.4.5, and cockpit shows the same version). On the Engine VM I already ran engine-upgrade-check, upgraded ovirt-setup, ran engine-setup successfully, and rebooted the engine, but whenever I open the engine web page it still shows Version 4.4.4.5-1.el8. Is this version correct? My second cluster, which uses its own hosted engine, shows a different version (4.4.5.11-1.el8). Is anybody having the same issue?
>
> Please share the setup log (in /var/log/ovirt-engine/setup). Thanks.
>
> Best regards,
> --
> Didi
>
>
--
Didi
How to detach FC Storage Domain
by Miguel Garcia
I added some FC storage domains to the wrong Data Center. I'm trying to detach them by going to Storage -> Storage Domains, selecting the storage domain, opening the Data Center tab, and clicking the Detach button, but I get the following error: "Failing detaching storage domain from Data Center".
So far I have found the following lines in the log files:
2021-03-25 22:58:41,621-04 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2636) [0a08dff7-13c4-49c2-b451-0dea136941ec] START, DetachStorageDomainVDSCommand( DetachStorageDomainVDSCommandParameters:{storagePoolId='68b8d8e9-af51-457d-9b9d-36e37ad60d55', ignoreFailoverLimit='false', storageDomainId='a1f6a779-a65f-427c-b034-174e1150361d', masterDomainId='00000000-0000-0000-0000-000000000000', masterVersion='1', force='false'}), log id: 42974429
2021-03-25 22:58:42,941-04 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2636) [0a08dff7-13c4-49c2-b451-0dea136941ec] Failed in 'DetachStorageDomainVDS' method
2021-03-25 22:58:42,944-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-2636) [0a08dff7-13c4-49c2-b451-0dea136941ec] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DetachStorageDomainVDS failed: Storage domain does not exist: (u'a1f6a779-a65f-427c-b034-174e1150361d',)
2021-03-25 22:58:42,945-04 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2636) [0a08dff7-13c4-49c2-b451-0dea136941ec] Command 'DetachStorageDomainVDSCommand( DetachStorageDomainVDSCommandParameters:{storagePoolId='68b8d8e9-af51-457d-9b9d-36e37ad60d55', ignoreFailoverLimit='false', storageDomainId='a1f6a779-a65f-427c-b034-174e1150361d', masterDomainId='00000000-0000-0000-0000-000000000000', masterVersion='1', force='false'})' execution failed: IRSGenericException: IRSErrorException: Failed to DetachStorageDomainVDS, error = Storage domain does not exist: (u'a1f6a779-a65f-427c-b034-174e1150361d',), code = 358
It seems that the storage domain does not exist. Is there a way to force the detach action?
Thanks in advance
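For reference, when the backing storage really is gone, the usual way out is to destroy the domain (which removes only the engine-side record) rather than detach it. A minimal sketch with the Python SDK (ovirtsdk4); the engine URL, credentials, and domain name are placeholders, and the exact parameters may vary by version:

    # Destroy a storage domain whose backing storage no longer exists:
    # remove() with destroy=True deletes only the engine database record.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal',
        password='password',                                 # placeholder
        ca_file='ca.pem',                                    # placeholder
    )
    sds_service = connection.system_service().storage_domains_service()
    sd = sds_service.list(search='name=myfcdomain')[0]       # placeholder name
    sds_service.storage_domain_service(sd.id).remove(destroy=True)
    connection.close()

If I recall correctly, the Administration Portal exposes the same operation as "Destroy" on the storage domain.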
extending vm disk when vm is up
by Nathanaël Blanchet
Hello,
Why is it not possible to extend a disk's size in the UI while a VM is
up, when it is possible to do so with the API/SDK/Ansible?
--
Nathanaël Blanchet
Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
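For reference, the API/SDK route the question mentions looks roughly like this with the Python SDK (a minimal sketch; the engine URL, credentials, VM name, and target size are placeholders, and it blindly takes the first disk attachment):

    # Grow the first disk of a running VM to 200 GiB by updating its attachment.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal',
        password='password',                                 # placeholder
        ca_file='ca.pem',                                    # placeholder
    )
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=myvm')[0]             # placeholder name
    attachments_service = vms_service.vm_service(vm.id).disk_attachments_service()
    attachment = attachments_service.list()[0]               # first disk, for brevity
    attachments_service.attachment_service(attachment.id).update(
        types.DiskAttachment(
            disk=types.Disk(provisioned_size=200 * 2**30),   # new size in bytes
        )
    )
    connection.close()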
Destroyed VM blocking hosts/filling logs
by davidk@riavera.com
Hello,
I've somehow gotten one of my VMs stuck in a state where oVirt now seems rather
confused about its existence. I'm running oVirt 4.3.10 and using oVirt Node on all the hosts.
My engine and host event logs are now filling up very rapidly with this error:
VDSM node217 command DestroyVDS failed: General Exception: ("'1048576'",)
I was playing with hugetable support, and that error number or string looks suspiciously
like the "hugetable size" custom property I set on the VM.
This VM was migrated to another host at one point, and now that host is
generating the same error as well.
When I try to move these hosts to maintenance mode, they get stuck in "Preparing for
Maintenance" while the engine tries to migrate or otherwise deal with the VM that's no
longer there. Forcibly rebooting the hosts does not change anything; the VM's state
seems to be persisted somewhere.
The VM in question is not running, and I can start it up on another host successfully,
but ovirt still thinks it exists on the other 2 hosts no matter what I do.
Is there perhaps some way to delete it from the engine database directly to straighten
things out?
Here's a dump of the vdsm log on one of the hosts. I haven't been able to pinpoint what
the exact issue is or how to fix it, but hopefully someone here will have seen this before?
2021-04-03 04:40:35,515+0000 INFO (jsonrpc/1) [api.virt] START destroy(gracefulAttempts=1) from=::ffff:10.100.0.210,58150, vmId=58abf0cf-d7b9-4067-a86a-e619928368e7 (api:48)
2021-04-03 04:40:35,516+0000 INFO (jsonrpc/1) [virt.vm] (vmId='58abf0cf-d7b9-4067-a86a-e619928368e7') Release VM resources (vm:5186)
2021-04-03 04:40:35,516+0000 WARN (jsonrpc/1) [virt.vm] (vmId='58abf0cf-d7b9-4067-a86a-e619928368e7') trying to set state to Powering down when already Down (vm:626)
2021-04-03 04:40:35,516+0000 INFO (jsonrpc/1) [virt.vm] (vmId='58abf0cf-d7b9-4067-a86a-e619928368e7') Stopping connection (guestagent:455)
2021-04-03 04:40:35,517+0000 INFO (jsonrpc/1) [vdsm.api] START teardownImage(sdUUID='a08af6be-3802-4bb1-9fa5-4b6a10227290', spUUID='78dc095a-5238-11e8-b8bf-00163e6a7af9', imgUUID='9c896907-59b0-4983-9478-b36b2c2eb01e', volUUID=None) from=::ffff:10.100.0.210,58150, task_id=fc946d20-126a-4fd0-9078-914b4a64b1d9 (api:48)
2021-04-03 04:40:35,518+0000 INFO (jsonrpc/1) [storage.StorageDomain] Removing image rundir link u'/var/run/vdsm/storage/a08af6be-3802-4bb1-9fa5-4b6a10227290/9c896907-59b0-4983-9478-b36b2c2eb01e' (fileSD:592)
2021-04-03 04:40:35,518+0000 INFO (jsonrpc/1) [vdsm.api] FINISH teardownImage return=None from=::ffff:10.100.0.210,58150, task_id=fc946d20-126a-4fd0-9078-914b4a64b1d9 (api:54)
2021-04-03 04:40:35,519+0000 INFO (jsonrpc/1) [vdsm.api] START teardownImage(sdUUID='b891448d-dd92-4a7b-a51a-22abc3d7da67', spUUID='78dc095a-5238-11e8-b8bf-00163e6a7af9', imgUUID='c0e95483-35f1-4a61-958e-4e308b70d3f8', volUUID=None) from=::ffff:10.100.0.210,58150, task_id=77c0fdca-e13a-44b5-9a00-290522b194b2 (api:48)
2021-04-03 04:40:35,520+0000 INFO (jsonrpc/1) [storage.StorageDomain] Removing image rundir link u'/var/run/vdsm/storage/b891448d-dd92-4a7b-a51a-22abc3d7da67/c0e95483-35f1-4a61-958e-4e308b70d3f8' (fileSD:592)
2021-04-03 04:40:35,520+0000 INFO (jsonrpc/1) [vdsm.api] FINISH teardownImage return=None from=::ffff:10.100.0.210,58150, task_id=77c0fdca-e13a-44b5-9a00-290522b194b2 (api:54)
2021-04-03 04:40:35,521+0000 INFO (jsonrpc/1) [virt.vm] (vmId='58abf0cf-d7b9-4067-a86a-e619928368e7') Stopping connection (guestagent:455)
2021-04-03 04:40:35,521+0000 WARN (jsonrpc/1) [root] File: /var/lib/libvirt/qemu/channels/58abf0cf-d7b9-4067-a86a-e619928368e7.ovirt-guest-agent.0 already removed (fileutils:54)
2021-04-03 04:40:35,521+0000 WARN (jsonrpc/1) [root] Attempting to remove a non existing network: ovirtmgmt/58abf0cf-d7b9-4067-a86a-e619928368e7 (libvirtnetwork:198)
2021-04-03 04:40:35,522+0000 WARN (jsonrpc/1) [root] Attempting to remove a non existing net user: ovirtmgmt/58abf0cf-d7b9-4067-a86a-e619928368e7 (libvirtnetwork:205)
2021-04-03 04:40:35,526+0000 WARN (jsonrpc/1) [root] Attempting to remove a non existing network: ovirtmgmt/58abf0cf-d7b9-4067-a86a-e619928368e7 (libvirtnetwork:198)
2021-04-03 04:40:35,526+0000 WARN (jsonrpc/1) [root] Attempting to remove a non existing net user: ovirtmgmt/58abf0cf-d7b9-4067-a86a-e619928368e7 (libvirtnetwork:205)
2021-04-03 04:40:35,527+0000 WARN (jsonrpc/1) [root] File: /var/lib/libvirt/qemu/channels/58abf0cf-d7b9-4067-a86a-e619928368e7.org.qemu.guest_agent.0 already removed (fileutils:54)
2021-04-03 04:40:35,528+0000 WARN (jsonrpc/1) [virt.vm] (vmId='58abf0cf-d7b9-4067-a86a-e619928368e7') timestamp already removed from stats cache (vm:2445)
2021-04-03 04:40:35,531+0000 ERROR (jsonrpc/1) [api] FINISH destroy error='1048576' (api:134)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 124, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 333, in destroy
    res = self.vm.destroy(gracefulAttempts)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5283, in destroy
    result = self.doDestroy(gracefulAttempts, reason)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5302, in doDestroy
    return self.releaseVm(gracefulAttempts)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5207, in releaseVm
    self._cleanup()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2382, in _cleanup
    self._cleanup_hugepages()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2393, in _cleanup_hugepages
    self.nr_hugepages, self.hugepagesz
  File "/usr/lib/python2.7/site-packages/vdsm/hugepages.py", line 243, in calculate_required_deallocation
    _preallocated_hugepages(vm_hugepagesz))
  File "/usr/lib/python2.7/site-packages/vdsm/hugepages.py", line 262, in _preallocated_hugepages
    kernel_args['hugepagesz']
  File "/usr/lib/python2.7/site-packages/vdsm/hugepages.py", line 291, in _cmdline_hugepagesz_to_kb
    }[cmdline]
KeyError: '1048576'
2021-04-03 04:40:35,531+0000 INFO (jsonrpc/1) [api.virt] FINISH destroy return={'status': {'message': 'General Exception: ("\'1048576\'",)', 'code': 100}} from=::ffff:10.100.0.210,58150, vmId=58abf0cf-d7b9-4067-a86a-e619928368e7 (api:54)
2021-04-03 04:40:35,532+0000 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call VM.destroy failed (error 100) in 0.02 seconds (__init__:312)
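The traceback pins it down: vdsm translates the hugepagesz value it finds on the
kernel command line through a fixed lookup table keyed by unit-suffixed strings
such as '2M' or '1G', and a bare '1048576' is not in that table. A simplified
reconstruction, not the actual vdsm source (the real table may differ):

    # Simplified reconstruction of vdsm's hugepagesz translation: the mapping
    # is keyed by the unit-suffixed strings the kernel accepts, so a raw
    # KiB count such as '1048576' raises exactly the KeyError seen above.
    def cmdline_hugepagesz_to_kb(cmdline):
        return {
            '2M': 2048,
            '2MB': 2048,
            '1G': 1048576,
            '1GB': 1048576,
        }[cmdline]

    print(cmdline_hugepagesz_to_kb('1G'))  # 1048576
    try:
        cmdline_hugepagesz_to_kb('1048576')
    except KeyError as e:
        print('KeyError:', e)  # KeyError: '1048576', matching the log above

So the likely trigger is a hugepagesz=1048576 somewhere on the host kernel command
line (presumably left over from the hugepage experiments) instead of a suffixed
value like hugepagesz=1G.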
[ANN] oVirt 4.4.6 Third Release Candidate is now available for testing
by Lev Veyde
oVirt 4.4.6 Third Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.6
Third Release Candidate for testing, as of April 8th, 2021.
This update is the sixth in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA or later should not require re-doing these
steps, if already performed while upgrading from 4.4.1 to 4.4.2 GA. These
are only required to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
Host enter emergency mode after upgrading to latest build
If you have your root file system on a multipath device on your hosts,
you should be aware that after upgrading from 4.4.1 to 4.4.6 your host
may enter emergency mode.
In order to prevent this, be sure to upgrade oVirt Engine first, then on
your hosts:
1. Remove the current lvm filter while still on 4.4.1, or in emergency mode (if rebooted).
2. Reboot.
3. Upgrade to 4.4.6 (redeploy in case of already being on 4.4.6).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place (see the sketch after this list).
5. Only if not using oVirt Node: run "dracut --force --add multipath" to rebuild the initramfs with the correct filter configuration.
6. Reboot.
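A quick way to eyeball the filter that step 4 should have put in place (a
minimal sketch; it assumes the 4.4-era behavior where vdsm-tool writes the
filter into /etc/lvm/lvm.conf — the location may differ on other versions):

    # Print any filter/global_filter lines currently set in lvm.conf.
    import re

    with open('/etc/lvm/lvm.conf') as conf:
        for line in conf:
            if re.match(r'\s*(global_)?filter\s*=', line):
                print(line.rstrip())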
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
* oVirt Node 4.4 based on CentOS Linux 8.3 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
- We found a few issues while testing on CentOS Stream so we are still
basing oVirt 4.4.6 Node and Appliance on CentOS Linux.
Additional Resources:
* Read more about the oVirt 4.4.6 release highlights:
http://www.ovirt.org/release/4.4.6/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.6/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
lev(a)redhat.com | lveyde(a)redhat.com
Q: Sparsify/fstrim don't reduce actual disk image size
by Andrei Verovski
Hi !
I have a VM (under oVirt) with a single thin-provisioned disk (~600 GB) running NextCloud on Debian 9.
Right now the VM's disk is almost empty. Unfortunately, it still occupies 584 GB (virtual size = 600 GB).
All partitions (except swap and boot) are EXT4 with the discard option, and in oVirt "enable discard" is on.
# fstrim -av runs successfully:
/var: 477.6 GiB (512851144704 bytes) trimmed on /dev/mapper/vg--system-lv4--data
/boot: 853.8 MiB (895229952 bytes) trimmed on /dev/mapper/vg--system-lv2--boot
/: 88.4 GiB (94888611840 bytes) trimmed on /dev/mapper/vg--system-lv3--sys
When fstrim runs again, it trims zero bytes. I even ran "Sparsify" in oVirt. Unfortunately, the actual size is still 584 GB.
Here is /etc/fstab
/dev/mapper/vg--system-lv3--sys / ext4 discard,noatime,nodiratime,errors=remount-ro 0 1
/dev/mapper/vg--system-lv2--boot /boot ext2 defaults 0 2
/dev/mapper/vg--system-lv4--data /var ext4 discard,noatime,nodiratime 0 2
/dev/mapper/vg--system-lv1--swap none swap sw 0 0
When the disk was partitioned and formatted, swap and boot were created first and positioned at the beginning.
What is wrong here? Is it possible to fix this?
Thanks in advance.
Andrei
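One thing worth checking before blaming sparsify: compare the image's apparent
size with what is actually allocated on disk. A minimal sketch, assuming a
file-based storage domain (NFS/Gluster) where the disk image is a regular file;
on block storage the LV allocation has to be inspected differently:

    # Compare a disk image's apparent size with its real on-disk allocation.
    # Usage: python check_alloc.py /path/to/image; st_blocks is always in
    # 512-byte units, independent of the filesystem block size.
    import os
    import sys

    st = os.stat(sys.argv[1])
    print('apparent (virtual) size: %d bytes' % st.st_size)
    print('allocated on disk:       %d bytes' % (st.st_blocks * 512))

If the allocated figure drops after sparsify while oVirt still reports 584 GB,
the image itself is fine and only the reported size is stale.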
cannot export - "r.original_template is undefined"
by Diggy Mc
I cannot export a VM to a data domain. I receive the error:
Export VM Failed
r.original_template is undefined
The VM originated as an OVA provided by a third party. After importing the OVA and customizing the VM, I wanted to export it for backup purposes, but cannot.
I am running oVirt 4.4.4. You can download the OVA directly from ERPNext.org if that helps troubleshooting. Download the "Production Image" from here: https://erpnext.org/get-started
Any help is appreciated and let me know if additional information is needed.
oVirt longevity after CentOS 8, RHV changes
by David White
I'm replying to Thomas's thread below, but am creating a new subject so as not to hijack the original thread.
I'm sure that this topic has come up before.
I first joined this list last fall, when I began planning and testing with oVirt, but as of the past few weeks, I'm paying closer attention to the mailing list now that I'm actually using oVirt and am getting ready to deploy to a production environment.
I'll also try to jump in and help other people as time permits and as my experience grows.
I echo Thomas's concerns here. While I'm thankful for Red Hat's gesture to allow people to use up to 16 Red Hat installs at no charge, I'm concerned about the longevity of oVirt, now that Red Hat is no longer going to support RHV going forward.
What is the benefit to Red Hat / IBM of supporting this platform now that it is no longer being commercialized as a Red Hat product? What is to prevent Red Hat from pulling the plug on this project, similar to what happened to CentOS 8?
As a user of oVirt (4.5, installed on Red Hat 8.3), how can I and others help to contribute to the project to ensure its longevity? Or should I really just go find an alternative in the future? I had been planning to use oVirt for a while and did some testing last fall, so the announcement of RHV's commercial demise was poor timing for me; I don't have time to switch gears and change my plans to use something else, like Proxmox.
From what I've seen, this is a great product, and I guess I can understand Red Hat's decision to pull the plug on the commercial project, now that OpenShift supports full VMs. But my understanding is that OpenShift is a lot more complicated and requires more resources. I really don't need a full kubernetes environment. I just need a stable virtualization platform.
Sent with ProtonMail Secure Email.
‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Thursday, April 1, 2021 5:44 PM, Thomas Hoberg <thomas(a)hoberg.net> wrote:
> I personally consider the fact that you gave up on 4.3/CentOS7 before CentOS 8 could have even been remotely reliable to run "a free open-source virtualization solution for your entire enterprise", a rather violent break of trust.
>
> I understand Redhat's motivation with Python 2/3 etc., but users just don't. Please just try for a minute to view this from a user's perspective.
>
> With CentOS 7 supported until 2024, we naturally expect the added value on top via oVirt to persist just as long.
>
> And with CentOS 8 support lasting until the end of this year, oVirt 4.4 can't be considered "Petrus" or a rock to build on.
>
> Most of us run oVirt simply because we are most interested in the VMs it runs (tenants paying rent).
>
> We're not interested in keeping oVirt itself stable and from failing after any update to the house of cards.
>
> And yes, by now I am sorry to have chosen oVirt at all, finding that 4.3 was abandoned before 4.4 or the CentOS 8 below was even stable, and long before the base OS ran out of support.
>
> To the users out there oVirt is a platform, a tool, not a means to itself.