iPXE support with Q35 chipset and UEFI
by Gianluca Cecchi
Hello,
I'm doing some tests with iPXE (I downloaded the latest version by cloning the git
repo a few days ago).
I'm using oVirt 4.4.5 with VMs configured with different chipset/firmware
types.
It seems that when using a dhcpd.conf directive like this:
if exists user-class and option user-class = "iPXE" {
filename "http://my_http_server/...";
}
else {
...
}
the VM boot matches the iPXE branch when I use the Q35 chipset with BIOS, while
it falls through to the "else" section when using the Q35 chipset with UEFI
(not the SecureBoot variant).
Does this mean that the Q35 UEFI doesn't support iPXE?
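A likely explanation (an assumption on my part, worth verifying): with BIOS, QEMU's NIC option ROM is itself an iPXE build, so the very first DHCP request already carries user-class "iPXE"; with UEFI, the OVMF firmware uses its own built-in PXE driver, never sends that user-class, and therefore lands in the "else" branch without ever chainloading iPXE. A common dhcpd.conf pattern from the iPXE chainloading docs handles both cases, roughly like the sketch below (the TFTP filenames are the stock iPXE binaries and the HTTP URL is left as in your snippet; adjust to your setup):
# Sketch based on the iPXE chainloading recipe; adjust filenames/URLs.
option client-architecture code 93 = unsigned integer 16;
if exists user-class and option user-class = "iPXE" {
    # Client is already running iPXE (BIOS or UEFI): hand out the boot URL
    filename "http://my_http_server/...";
} elsif option client-architecture != 00:00 {
    # Plain UEFI PXE client (e.g. OVMF): chainload the UEFI iPXE binary first
    filename "ipxe.efi";
} else {
    # Plain legacy-BIOS PXE client: chainload the BIOS iPXE binary first
    filename "undionly.kpxe";
}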
BTW: suggestions are welcome for a utility that can let me boot via network
and give the user a general menu from which they can choose standard PXE with
BIOS or UEFI boot, so that I can install both Linux-based systems and others,
such as ESXi hosts, with either BIOS or UEFI.
Gianluca
Re: Engine Upgrade to 4.4.5 Version Number Question
by Yedidyah Bar David
On Sat, Apr 10, 2021 at 8:08 AM Nur Imam Febrianto <nur_imam(a)outlook.com> wrote:
>
> Here are the setup logs.
Do you have "everything" (engine, dwh, etc.) on the same machine, or
different stuff set up on different machines?
Any chance you manually edited
/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf for any
reason?
I see there:
2021-03-18 22:45:25,952+0700 DEBUG otopi.context context.dumpEnvironment:775 ENV OVESETUP_ENGINE_CORE/enable=bool:'False'
This normally happens only if the engine was not configured on this machine.
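If you want to double-check what is currently configured, something like the line below should show every place that key is set (the answers directory is my assumption about where engine-setup keeps its answer files):
grep -rn 'OVESETUP_ENGINE_CORE' /etc/ovirt-engine-setup.conf.d/ /var/lib/ovirt-engine/setup/answers/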
Best regards,
>
> Thanks.
>
>
>
> From: Yedidyah Bar David
> Sent: 07 April 2021 12:36
> To: Nur Imam Febrianto
> Cc: oVirt Users
> Subject: Re: [ovirt-users] Engine Upgrade to 4.4.5 Version Number Question
>
>
>
> On Tue, Apr 6, 2021 at 6:49 PM Nur Imam Febrianto <nur_imam(a)outlook.com> wrote:
> >
> > I’m currently trying to upgrade our cluster from 4.4.4 to 4.4.5. All using oVirt Node.
> >
> > All hosts were successfully upgraded to 4.4.5 (I can see the image layer changed from 4.4.4 to 4.4.5, and cockpit also shows the same version), but on the Engine VM I have already run engine-upgrade-check, upgraded ovirt-setup, run engine-setup successfully and rebooted the engine, yet whenever I open the engine web page it still shows Version 4.4.4.5-1.el8. Is this version correct? My second cluster, which uses its own hosted engine, shows a different version (4.4.5.11-1.el8). Is anybody having the same issue?
>
> Please share the setup log (in /var/log/ovirt-engine/setup). Thanks.
>
> Best regards,
> --
> Didi
>
>
--
Didi
Re: Engine Upgrade to 4.4.5 Version Number Question
by Yedidyah Bar David
On Tue, Apr 6, 2021 at 6:49 PM Nur Imam Febrianto <nur_imam(a)outlook.com> wrote:
>
> I’m currently trying to upgrade our cluster from 4.4.4 to 4.4.5. All using oVirt Node.
>
> All hosts were successfully upgraded to 4.4.5 (I can see the image layer changed from 4.4.4 to 4.4.5, and cockpit also shows the same version), but on the Engine VM I have already run engine-upgrade-check, upgraded ovirt-setup, run engine-setup successfully and rebooted the engine, yet whenever I open the engine web page it still shows Version 4.4.4.5-1.el8. Is this version correct? My second cluster, which uses its own hosted engine, shows a different version (4.4.5.11-1.el8). Is anybody having the same issue?
Please share the setup log (in /var/log/ovirt-engine/setup). Thanks.
Best regards,
--
Didi
How to detach FC Storage Domain
by Miguel Garcia
I added some FC storage domains to the wrong Data Center. I'm trying to detach them by going to Storage -> Storage Domains, selecting the storage domain, opening the Data Center tab and clicking the Detach button, but I get the following error: "Failing detaching storage domain from Data Center".
In the log files I have found the following lines so far:
2021-03-25 22:58:41,621-04 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2636) [0a08dff7-13c4-49c2-b451-0dea136941ec] START, DetachStorageDomainVDSCommand( DetachStorageDomainVDSCommandParameters:{storagePoolId='68b8d8e9-af51-457d-9b9d-36e37ad60d55', ignoreFailoverLimit='false', storageDomainId='a1f6a779-a65f-427c-b034-174e1150361d', masterDomainId='00000000-0000-0000-0000-000000000000', masterVersion='1', force='false'}), log id: 42974429
2021-03-25 22:58:42,941-04 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2636) [0a08dff7-13c4-49c2-b451-0dea136941ec] Failed in 'DetachStorageDomainVDS' method
2021-03-25 22:58:42,944-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-2636) [0a08dff7-13c4-49c2-b451-0dea136941ec] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DetachStorageDomainVDS failed: Storage domain does not exist: (u'a1f6a779-a65f-427c-b034-174e1150361d',)
2021-03-25 22:58:42,945-04 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2636) [0a08dff7-13c4-49c2-b451-0dea136941ec] Command 'DetachStorageDomainVDSCommand( DetachStorageDomainVDSCommandParameters:{storagePoolId='68b8d8e9-af51-457d-9b9d-36e37ad60d55', ignoreFailoverLimit='false', storageDomainId='a1f6a779-a65f-427c-b034-174e1150361d', masterDomainId='00000000-0000-0000-0000-000000000000', masterVersion='1', force='false'})' execution failed: IRSGenericException: IRSErrorException: Failed to DetachStorageDomainVDS, error = Storage domain does not exist: (u'a1f6a779-a65f-427c-b034-174e1150361d',), code = 358
It seems that the storage domain does not exist. Is there a way to force the detach action?
Thanks in advance
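For reference, here is a rough sketch of the same detach done through the Python SDK (ovirtsdk4), using the pool and domain IDs from the log above; the engine URL and credentials are placeholders and this is only an illustration, not a guaranteed fix. If the domain really no longer exists on the storage side, the "Destroy" operation on the storage domain (which only removes it from the engine database) may be what is needed instead.
# Sketch only: detach an attached storage domain from a data center via the
# oVirt Python SDK. Engine URL and credentials below are placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,  # use ca_file=... instead in a real setup
)

attached_sds = (connection.system_service()
                .data_centers_service()
                .data_center_service('68b8d8e9-af51-457d-9b9d-36e37ad60d55')
                .storage_domains_service())

# Equivalent to DELETE /datacenters/{dc}/storagedomains/{sd}, i.e. detach from the DC
attached_sds.storage_domain_service('a1f6a779-a65f-427c-b034-174e1150361d').remove()

connection.close()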
extending vm disk when vm is up
by Nathanaël Blanchet
Hello,
Why is it not possible to extend a disk's size in the UI when a VM is up, while
it is possible to do so with the API/SDK/Ansible?
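For comparison, a minimal sketch of the online extend through the Python SDK (ovirtsdk4); the engine URL, credentials, disk name and target size are all placeholders:
# Sketch only: grow a disk while the VM is running, via the oVirt Python SDK.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

disks_service = connection.system_service().disks_service()
disk = disks_service.list(search='name=myvm_Disk1')[0]

# Ask the engine to extend the provisioned size to 200 GiB
disks_service.disk_service(disk.id).update(
    types.Disk(provisioned_size=200 * 2**30),
)

connection.close()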
--
Nathanaël Blanchet
Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
Destroyed VM blocking hosts/filling logs
by davidk@riavera.com
Hello,
I've somehow gotten one of my VMs stuck in a state where oVirt now seems rather confused
about its existence. I'm running oVirt 4.3.10 and using oVirt Node on all the hosts.
My engine and host event logs are now filling up very rapidly with this error:
VDSM node217 command DestroyVDS failed: General Exception: ("'1048576'",)
I was playing with hugepage support, and that error number/string looks suspiciously
like the hugepage size custom property I set on the VM.
This VM was migrated to another host at one point, and now that host is
generating the same error as well.
When I try to move these hosts to maintenance mode, they get stuck in "Preparing for
Maintenance" while the engine tries to migrate/deal with the VM that is no longer there.
Forcibly rebooting the hosts does not change anything; the VM/host state seems to be
persisted somewhere.
The VM in question is not running, and I can start it up on another host successfully,
but oVirt still thinks it exists on the other two hosts no matter what I do.
Is there perhaps some way to delete it from the engine database directly to straighten
things out?
Here's a dump of the vdsm log on one of the hosts. I haven't been able to pinpoint what
the exact issue is or how to fix it, but hopefully someone here will have seen this before?
2021-04-03 04:40:35,515+0000 INFO (jsonrpc/1) [api.virt] START destroy(gracefulAttempts=1) from=::ffff:10.100.0.210,58150, vmId=58abf0cf-d7b9-4067-a86a-e619928368e7 (api:48)
2021-04-03 04:40:35,516+0000 INFO (jsonrpc/1) [virt.vm] (vmId='58abf0cf-d7b9-4067-a86a-e619928368e7') Release VM resources (vm:5186)
2021-04-03 04:40:35,516+0000 WARN (jsonrpc/1) [virt.vm] (vmId='58abf0cf-d7b9-4067-a86a-e619928368e7') trying to set state to Powering down when already Down (vm:626)
2021-04-03 04:40:35,516+0000 INFO (jsonrpc/1) [virt.vm] (vmId='58abf0cf-d7b9-4067-a86a-e619928368e7') Stopping connection (guestagent:455)
2021-04-03 04:40:35,517+0000 INFO (jsonrpc/1) [vdsm.api] START teardownImage(sdUUID='a08af6be-3802-4bb1-9fa5-4b6a10227290', spUUID='78dc095a-5238-11e8-b8bf-00163e6a7af9', imgUUID='9c896907-59b0-4983-9478-b36b2c2eb01e', volUUID=None) from=::ffff:10.100.0.210,58150, task_id=fc946d20-126a-4fd0-9078-914b4a64b1d9 (api:48)
2021-04-03 04:40:35,518+0000 INFO (jsonrpc/1) [storage.StorageDomain] Removing image rundir link u'/var/run/vdsm/storage/a08af6be-3802-4bb1-9fa5-4b6a10227290/9c896907-59b0-4983-9478-b36b2c2eb01e' (fileSD:592)
2021-04-03 04:40:35,518+0000 INFO (jsonrpc/1) [vdsm.api] FINISH teardownImage return=None from=::ffff:10.100.0.210,58150, task_id=fc946d20-126a-4fd0-9078-914b4a64b1d9 (api:54)
2021-04-03 04:40:35,519+0000 INFO (jsonrpc/1) [vdsm.api] START teardownImage(sdUUID='b891448d-dd92-4a7b-a51a-22abc3d7da67', spUUID='78dc095a-5238-11e8-b8bf-00163e6a7af9', imgUUID='c0e95483-35f1-4a61-958e-4e308b70d3f8', volUUID=None) from=::ffff:10.100.0.210,58150, task_id=77c0fdca-e13a-44b5-9a00-290522b194b2 (api:48)
2021-04-03 04:40:35,520+0000 INFO (jsonrpc/1) [storage.StorageDomain] Removing image rundir link u'/var/run/vdsm/storage/b891448d-dd92-4a7b-a51a-22abc3d7da67/c0e95483-35f1-4a61-958e-4e308b70d3f8' (fileSD:592)
2021-04-03 04:40:35,520+0000 INFO (jsonrpc/1) [vdsm.api] FINISH teardownImage return=None from=::ffff:10.100.0.210,58150, task_id=77c0fdca-e13a-44b5-9a00-290522b194b2 (api:54)
2021-04-03 04:40:35,521+0000 INFO (jsonrpc/1) [virt.vm] (vmId='58abf0cf-d7b9-4067-a86a-e619928368e7') Stopping connection (guestagent:455)
2021-04-03 04:40:35,521+0000 WARN (jsonrpc/1) [root] File: /var/lib/libvirt/qemu/channels/58abf0cf-d7b9-4067-a86a-e619928368e7.ovirt-guest-agent.0 already removed (fileutils:54)
2021-04-03 04:40:35,521+0000 WARN (jsonrpc/1) [root] Attempting to remove a non existing network: ovirtmgmt/58abf0cf-d7b9-4067-a86a-e619928368e7 (libvirtnetwork:198)
2021-04-03 04:40:35,522+0000 WARN (jsonrpc/1) [root] Attempting to remove a non existing net user: ovirtmgmt/58abf0cf-d7b9-4067-a86a-e619928368e7 (libvirtnetwork:205)
2021-04-03 04:40:35,526+0000 WARN (jsonrpc/1) [root] Attempting to remove a non existing network: ovirtmgmt/58abf0cf-d7b9-4067-a86a-e619928368e7 (libvirtnetwork:198)
2021-04-03 04:40:35,526+0000 WARN (jsonrpc/1) [root] Attempting to remove a non existing net user: ovirtmgmt/58abf0cf-d7b9-4067-a86a-e619928368e7 (libvirtnetwork:205)
2021-04-03 04:40:35,527+0000 WARN (jsonrpc/1) [root] File: /var/lib/libvirt/qemu/channels/58abf0cf-d7b9-4067-a86a-e619928368e7.org.qemu.guest_agent.0 already removed (fileutils:54)
2021-04-03 04:40:35,528+0000 WARN (jsonrpc/1) [virt.vm] (vmId='58abf0cf-d7b9-4067-a86a-e619928368e7') timestamp already removed from stats cache (vm:2445)
2021-04-03 04:40:35,531+0000 ERROR (jsonrpc/1) [api] FINISH destroy error='1048576' (api:134)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 124, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 333, in destroy
    res = self.vm.destroy(gracefulAttempts)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5283, in destroy
    result = self.doDestroy(gracefulAttempts, reason)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5302, in doDestroy
    return self.releaseVm(gracefulAttempts)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 5207, in releaseVm
    self._cleanup()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2382, in _cleanup
    self._cleanup_hugepages()
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 2393, in _cleanup_hugepages
    self.nr_hugepages, self.hugepagesz
  File "/usr/lib/python2.7/site-packages/vdsm/hugepages.py", line 243, in calculate_required_deallocation
    _preallocated_hugepages(vm_hugepagesz))
  File "/usr/lib/python2.7/site-packages/vdsm/hugepages.py", line 262, in _preallocated_hugepages
    kernel_args['hugepagesz']
  File "/usr/lib/python2.7/site-packages/vdsm/hugepages.py", line 291, in _cmdline_hugepagesz_to_kb
    }[cmdline]
KeyError: '1048576'
2021-04-03 04:40:35,531+0000 INFO (jsonrpc/1) [api.virt] FINISH destroy return={'status': {'message': 'General Exception: ("\'1048576\'",)', 'code': 100}} from=::ffff:10.100.0.210,58150, vmId=58abf0cf-d7b9-4067-a86a-e619928368e7 (api:54)
2021-04-03 04:40:35,532+0000 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call VM.destroy failed (error 100) in 0.02 seconds (__init__:312)
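For what it's worth, the traceback ends in a plain dictionary lookup that converts the hugepagesz value found on the kernel command line into kilobytes. The snippet below is only my own illustration (not the real vdsm code) of why a spelling like '1048576' would raise exactly this KeyError while the kernel-style '1G' would not; it may be worth checking what hugepagesz= is set to on the hosts' kernel command line.
# Illustration only -- NOT the actual vdsm/hugepages.py source. The lookup
# expects the kernel-cmdline spelling of the page size (e.g. '2M', '1G'),
# so any other spelling of the same value fails with a KeyError like the one above.
def cmdline_hugepagesz_to_kb(cmdline):
    return {
        '2M': 2048,
        '1G': 1048576,
    }[cmdline]

print(cmdline_hugepagesz_to_kb('1G'))       # -> 1048576
print(cmdline_hugepagesz_to_kb('1048576'))  # -> KeyError: '1048576'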
[ANN] oVirt 4.4.6 Third Release Candidate is now available for testing
by Lev Veyde
oVirt 4.4.6 Third Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.6
Third Release Candidate for testing, as of April 8th, 2021.
This update is the sixth in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA or later should not require re-doing these
steps if they were already performed while upgrading from 4.4.1 to 4.4.2 GA;
they only need to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
Host enter emergency mode after upgrading to latest build
If you have your root file system on a multipath device on your hosts, you
should be aware that after upgrading from 4.4.1 to 4.4.6 your host may enter
emergency mode.
In order to prevent this, be sure to upgrade oVirt Engine first, then follow
these steps on your hosts (a rough command sketch follows the list):
1. Remove the current lvm filter while still on 4.4.1, or in emergency mode (if rebooted).
2. Reboot.
3. Upgrade to 4.4.6 (redeploy in case of already being on 4.4.6).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to rebuild the initramfs with the correct filter configuration.
6. Reboot.
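For a non-oVirt-Node (plain EL) host, the whole sequence looks roughly like the sketch below; this is only an illustration that assumes the old filter lives in /etc/lvm/lvm.conf, so review and adapt it before running anything:
# Rough sketch of steps 1-6 above for a plain EL host (not oVirt Node).
# Assumes the old lvm filter is set in /etc/lvm/lvm.conf.

# 1. Comment out the current lvm filter while still on 4.4.1
sed -i 's/^\(\s*filter\s*=\)/# \1/' /etc/lvm/lvm.conf
# 2. Reboot
reboot
# 3. Upgrade the host to 4.4.6 (e.g. reinstall from the engine UI, or:)
dnf upgrade -y
# 4. Confirm a new filter is in place
vdsm-tool config-lvm-filter
# 5. Rebuild the initramfs with the multipath configuration
dracut --force --add multipath
# 6. Reboot again
reboot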
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
* oVirt Node 4.4 based on CentOS Linux 8.3 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
- We found a few issues while testing on CentOS Stream so we are still
basing oVirt 4.4.6 Node and Appliance on CentOS Linux.
Additional Resources:
* Read more about the oVirt 4.4.6 release highlights:
http://www.ovirt.org/release/4.4.6/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.6/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Q: Sparsify/fstrim don't reduce actual disk image size
by Andrei Verovski
Hi !
I have a VM (under oVirt) with a single thin-provisioned disk (~600 GB) running NextCloud on Debian 9.
Right now the VM's disk is almost empty. Unfortunately, it occupies 584 GB (virtual size = 600 GB).
All partitions (except swap and boot) are EXT4 with the discard option; in oVirt, "Enable Discard" is on.
# fstrim -av runs successfully:
/var: 477.6 GiB (512851144704 bytes) trimmed on /dev/mapper/vg--system-lv4--data
/boot: 853.8 MiB (895229952 bytes) trimmed on /dev/mapper/vg--system-lv2--boot
/: 88.4 GiB (94888611840 bytes) trimmed on /dev/mapper/vg--system-lv3--sys
When fstrim runs again, it trims zero bytes. I even ran "Sparsify" in oVirt. Unfortunately, the actual size is still 584 GB.
Here is /etc/fstab
/dev/mapper/vg--system-lv3--sys / ext4 discard,noatime,nodiratime,errors=remount-ro 0 1
/dev/mapper/vg--system-lv2--boot /boot ext2 defaults 0 2
/dev/mapper/vg--system-lv4--data /var ext4 discard,noatime,nodiratime 0 2
/dev/mapper/vg--system-lv1--swap none swap sw 0 0
When the disk was partitioned/formatted, swap and boot were created first and positioned at the beginning.
What is wrong here? Is it possible to fix this?
Thanks in advance.
Andrei
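For reference, one way to see where the space is actually being counted is to compare the image's virtual size with its on-disk allocation on the storage domain. The sketch below assumes a file-based (e.g. NFS) storage domain mounted on a host; all path components are placeholders:
# Sketch only: compare virtual size vs. actual allocation of the disk image
# (file-based storage assumed; replace the placeholder path components).
qemu-img info /rhev/data-center/mnt/<server:_export>/<sd_uuid>/images/<img_uuid>/<vol_uuid>
du -sh /rhev/data-center/mnt/<server:_export>/<sd_uuid>/images/<img_uuid>/<vol_uuid>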