poweroff and reboot with ovirt_vm ansible module
by Nathanaël Blanchet
Hello, is there a way to power off or reboot a VM with the ovirt_vm Ansible
module, other than cycling it through the stopped and running states?
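One approach, sketched from the module's documented parameters (`state` and `force`; the VM name and auth wiring below are placeholders): `state: stopped` requests a graceful shutdown, adding `force: true` turns it into a hard power-off, and a reboot can be expressed as a stop followed by a start.

```yaml
# Sketch only -- assumes auth was obtained earlier via ovirt_auth.
- name: Hard power-off (poweroff) of the VM
  ovirt.ovirt.ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: myvm
    state: stopped
    force: true        # without force, this is a graceful shutdown

- name: Reboot expressed as stop + start
  ovirt.ovirt.ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: myvm
    state: stopped

- ovirt.ovirt.ovirt_vm:
    auth: "{{ ovirt_auth }}"
    name: myvm
    state: running
```

Depending on the installed ovirt.ovirt collection version, additional states may be available; `ansible-doc ovirt.ovirt.ovirt_vm` lists the ones your install supports.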
--
Nathanaël Blanchet
Network supervision
IT Infrastructure Division
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tel. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
3 years, 10 months
[ANN] Async release for oVirt 4.4.6
by Lev Veyde
oVirt 4.4.6 Async update #3
On May 18th 2021 the oVirt project released an async update to the
following packages:
- Vdsm 4.40.60.7
- oVirt Node 4.4.6.3
Fixing the following bugs:
- Bug 1959945 <https://bugzilla.redhat.com/show_bug.cgi?id=1959945> -
[NBDE] RHVH 4.4.6 host fails to start up without prompting for passphrase
- Bug 1955571 <https://bugzilla.redhat.com/show_bug.cgi?id=1955571> -
Verify if we still need to omit ifcfg and clevis dracut modules for
properly working bridged network
- Bug 1950209 <https://bugzilla.redhat.com/show_bug.cgi?id=1950209> - Leaf
images used by the VM are deleted by the engine during snapshot merge
oVirt Node Changes:
- Consume above oVirt updates
- Updated to Gluster 8.5
<https://docs.gluster.org/en/latest/release-notes/8.5/>
Full diff list:
--- ovirt-node-ng-image-4.4.6.2.manifest-rpm 2021-05-14 08:58:12.581488678 +0200
+++ ovirt-node-ng-image-4.4.6.3.manifest-rpm 2021-05-18 13:09:07.858527812 +0200
@@ -220,7 +220,7 @@
-glusterfs-8.4-1.el8.x86_64
-glusterfs-cli-8.4-1.el8.x86_64
-glusterfs-client-xlators-8.4-1.el8.x86_64
-glusterfs-events-8.4-1.el8.x86_64
-glusterfs-fuse-8.4-1.el8.x86_64
-glusterfs-geo-replication-8.4-1.el8.x86_64
-glusterfs-server-8.4-1.el8.x86_64
+glusterfs-8.5-1.el8.x86_64
+glusterfs-cli-8.5-1.el8.x86_64
+glusterfs-client-xlators-8.5-1.el8.x86_64
+glusterfs-events-8.5-1.el8.x86_64
+glusterfs-fuse-8.5-1.el8.x86_64
+glusterfs-geo-replication-8.5-1.el8.x86_64
+glusterfs-server-8.5-1.el8.x86_64
@@ -383,6 +383,6 @@
-libgfapi0-8.4-1.el8.x86_64
-libgfchangelog0-8.4-1.el8.x86_64
-libgfrpc0-8.4-1.el8.x86_64
-libgfxdr0-8.4-1.el8.x86_64
-libglusterd0-8.4-1.el8.x86_64
-libglusterfs0-8.4-1.el8.x86_64
+libgfapi0-8.5-1.el8.x86_64
+libgfchangelog0-8.5-1.el8.x86_64
+libgfrpc0-8.5-1.el8.x86_64
+libgfxdr0-8.5-1.el8.x86_64
+libglusterd0-8.5-1.el8.x86_64
+libglusterfs0-8.5-1.el8.x86_64
@@ -643 +643 @@
-ovirt-node-ng-image-update-placeholder-4.4.6.2-1.el8.noarch
+ovirt-node-ng-image-update-placeholder-4.4.6.3-1.el8.noarch
@@ -651,2 +651,2 @@
-ovirt-release-host-node-4.4.6.2-1.el8.noarch
-ovirt-release44-4.4.6.2-1.el8.noarch
+ovirt-release-host-node-4.4.6.3-1.el8.noarch
+ovirt-release44-4.4.6.3-1.el8.noarch
@@ -754 +754 @@
-python3-gluster-8.4-1.el8.x86_64
+python3-gluster-8.5-1.el8.x86_64
@@ -940,15 +940,15 @@
-vdsm-4.40.60.6-1.el8.x86_64
-vdsm-api-4.40.60.6-1.el8.noarch
-vdsm-client-4.40.60.6-1.el8.noarch
-vdsm-common-4.40.60.6-1.el8.noarch
-vdsm-gluster-4.40.60.6-1.el8.x86_64
-vdsm-hook-ethtool-options-4.40.60.6-1.el8.noarch
-vdsm-hook-fcoe-4.40.60.6-1.el8.noarch
-vdsm-hook-openstacknet-4.40.60.6-1.el8.noarch
-vdsm-hook-vhostmd-4.40.60.6-1.el8.noarch
-vdsm-hook-vmfex-dev-4.40.60.6-1.el8.noarch
-vdsm-http-4.40.60.6-1.el8.noarch
-vdsm-jsonrpc-4.40.60.6-1.el8.noarch
-vdsm-network-4.40.60.6-1.el8.x86_64
-vdsm-python-4.40.60.6-1.el8.noarch
-vdsm-yajsonrpc-4.40.60.6-1.el8.noarch
+vdsm-4.40.60.7-1.el8.x86_64
+vdsm-api-4.40.60.7-1.el8.noarch
+vdsm-client-4.40.60.7-1.el8.noarch
+vdsm-common-4.40.60.7-1.el8.noarch
+vdsm-gluster-4.40.60.7-1.el8.x86_64
+vdsm-hook-ethtool-options-4.40.60.7-1.el8.noarch
+vdsm-hook-fcoe-4.40.60.7-1.el8.noarch
+vdsm-hook-openstacknet-4.40.60.7-1.el8.noarch
+vdsm-hook-vhostmd-4.40.60.7-1.el8.noarch
+vdsm-hook-vmfex-dev-4.40.60.7-1.el8.noarch
+vdsm-http-4.40.60.7-1.el8.noarch
+vdsm-jsonrpc-4.40.60.7-1.el8.noarch
+vdsm-network-4.40.60.7-1.el8.x86_64
+vdsm-python-4.40.60.7-1.el8.noarch
+vdsm-yajsonrpc-4.40.60.7-1.el8.noarch
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
3 years, 10 months
Ovirt Engine -- Connection Refused to all hosts
by Nick Polites
Hi All,
I am not sure if my original post is being reviewed before posting but trying again in case it failed to send.
I tried logging in this morning to oVirt and saw that all of my hosts are unresponsive. I am seeing a connection refused error in the engine logs. I am able to SSH to and ping the hosts from the engine. Any help would be appreciated.
2021-05-15 15:19:21,041Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-65) [] Command 'GetCapabilitiesAsyncVDSCommand(HostName = hlkvm03, VdsIdAndVdsVDSCommandParametersBase:{hostId='2186eca7-4d9d-482f-b1b7-b63ac46b96aa', vds='Host[hlkvm03,2186eca7-4d9d-482f-b1b7-b63ac46b96aa]'})' execution failed: java.net.ConnectException: Connection refused
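The engine talks to each host's VDSM daemon on TCP port 54321, so "connection refused" with working SSH and ping usually points at vdsmd itself rather than the network. A quick triage sketch (the hostname is taken from the log above; the port and certificate path are the oVirt defaults and may differ on your install):

```shell
# On the unresponsive host: is VDSM running and listening?
systemctl status vdsmd
ss -tlnp | grep 54321

# From the engine: can the VDSM port be reached at all?
curl -kv https://hlkvm03:54321/ 2>&1 | head

# Expired VDSM certificates can also break engine-to-host traffic
# on every host at once; check the certificate validity dates:
openssl x509 -noout -dates -in /etc/pki/vdsm/certs/vdsmcert.pem
```

All hosts failing at the same moment tends to indicate something shared, such as a certificate expiry or a firewall change, rather than per-host hardware trouble.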
3 years, 10 months
Migrating VMs with templates from 4.2.8 to new 4.4.6 instance
by Pavel Strzinek
Hello,
I am having a hard time migrating VMs based on templates from an existing 4.2.8 instance to a newly installed 4.4.6 HCI with different storage. When I export the VMs and corresponding templates to an export NFS storage domain, detach it from the source DC, and attach the export storage to the new oVirt environment, the import of the templates fails with the error "Failed to import Template XXX to Data Center YYY, Cluster ZZZ". I cannot find anything more specific about the error in the logs. Am I missing something?
Exporting to OVA and importing back does work, but I want to make use of template thin provisioning.
3 years, 10 months
oVirt 2021 Spring survey
by Sandro Bonazzola
As we continue to develop oVirt 4.4, the Development and Integration teams
at Red Hat would value insights on how you are deploying the oVirt
environment.
Please help us hit the mark by completing this short survey. The survey
will close on *May 30th 2021*.
If you're managing multiple oVirt deployments with very different use cases
or configurations, you can consider answering this survey multiple times.
*Please note the answers to this survey will be publicly accessible*.
This survey is under oVirt Privacy Policy available at
https://ovirt.org/privacy-policy.html .
The survey form is available at
https://docs.google.com/forms/d/e/1FAIpQLScdJGoBYxuW-4IsIvZGVpbiEWhmt4O-o...
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
3 years, 10 months
Create Brick from Engine host view
by Harry O
When I try to create a single-disk brick via the host's "Storage Devices" view in the engine, I get the following error:
Error while executing action Create Brick: Internal Engine Error
Failed to create brick lalaf on host hej1.5ervers.lan of cluster Clu1.
I want the brick to be a single disk, no RAID, no cache. Is there a way to create it via the CLI? Do I need to pull some logs?
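For reference, a brick created outside the engine is essentially an LV formatted with XFS and mounted under a brick directory. A minimal single-disk sketch (the device, VG/LV names, and paths are placeholders, not what the engine would generate):

```shell
# Sketch: manual single-disk brick, no RAID, no lvmcache.
pvcreate /dev/sdb
vgcreate gluster_vg_sdb /dev/sdb
lvcreate -l 100%FREE -n gluster_lv_brick1 gluster_vg_sdb
# 512-byte inodes, as Gluster recommends for bricks:
mkfs.xfs -i size=512 /dev/gluster_vg_sdb/gluster_lv_brick1
mkdir -p /gluster_bricks/brick1
mount /dev/gluster_vg_sdb/gluster_lv_brick1 /gluster_bricks/brick1
echo '/dev/gluster_vg_sdb/gluster_lv_brick1 /gluster_bricks/brick1 xfs defaults 0 0' >> /etc/fstab
# Then use the path with "gluster volume create" or "add-brick", e.g.:
gluster volume create vol1 hej1.5ervers.lan:/gluster_bricks/brick1/brick
```

For the original error, engine.log on the engine and vdsm.log on the host, around the time of the Create Brick action, are the logs worth pulling.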
3 years, 11 months
Power management on Dell PowerEdge R320 and R520
by Pavel Strzinek
Hello,
I am having trouble configuring fencing on these two servers with the iDRAC7 module from a freshly installed oVirt 4.4.6. I tried the drac5, drac7 and ipmilan modules and none of them passes the test. I used ipmilan with the option "lanplus=1", as noted in previous threads about iDRAC usage with oVirt/RHEV, but with no success. Also, I am successfully using ipmilan fencing from an older oVirt 4.2 on several SuperMicro server nodes.
This is the error message in engine.log:
2021-05-14 09:28:41,185+02 ERROR [org.ovirt.engine.core.bll.pm.FenceProxyLocator] (default task-6) [12faf352-da4a-4b24-9ab9-af54559cecd1] Can not run fence action on host 'onode1', no suitable proxy host was found.
I can successfully query the idrac module with ipmitool from command line on the node, using the same credentials.
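That particular error is about the fence proxy rather than the credentials: the engine runs fence actions through another host in Up state (selected per the cluster/DC fence-proxy preference), and "no suitable proxy host was found" means no such host was available, for example when only one host is up. Once a second host is Up, the agent itself can be exercised with the same fence agent the proxy would run (a sketch; the address and credentials are placeholders):

```shell
# fence_ipmilan with --lanplus corresponds to the engine's
# ipmilan agent with the lanplus=1 option.
fence_ipmilan --ip=IDRAC_ADDRESS --username=USER --password=PASS \
    --lanplus --action=status
```

If this succeeds from another host but the engine test still fails, the fence-proxy selection (cluster/DC settings, host states) is the place to look.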
3 years, 11 months
[ANN] Async release for oVirt 4.4.6
by Sandro Bonazzola
On May 14th 2021 the oVirt project released an async update to the
following packages:
- ovirt-hosted-engine-ha-2.4.7
- ovirt-release44-4.4.6.2
- ovirt-engine-4.4.6.8
- oVirt Node 4.4.6.2
Fixing the following bugs:
- Bug 1909888 <https://bugzilla.redhat.com/show_bug.cgi?id=1909888> - [RFE]
Support multiple IQN in hosted-engine.conf for Active-Active DR setup
- Bug 1957253 <https://bugzilla.redhat.com/show_bug.cgi?id=1957253> - [cinderlib]
Enable using Managed Block Storage on 4.6 cluster by default
- Bug 1958869 <https://bugzilla.redhat.com/show_bug.cgi?id=1958869> - Import
VM from export domain fails - the imported VM remains in 'image locked'
state
oVirt Node Changes:
- Consume above oVirt updates
- Updated hivex (CVE-2021-3504
<https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3504>)
Full diff list:
--- ovirt-node-ng-image-4.4.6.1.manifest-rpm 2021-05-11 08:39:44.714649170 +0200
+++ ovirt-node-ng-image-4.4.6.2.manifest-rpm 2021-05-14 08:58:12.581488678 +0200
@@ -253 +253 @@
-hivex-1.3.18-20.module_el8.5.0+746+bbd5d70c.x86_64
+hivex-1.3.18-21.el8s.x86_64
@@ -638 +638 @@
-ovirt-hosted-engine-ha-2.4.6-1.el8.noarch
+ovirt-hosted-engine-ha-2.4.7-1.el8.noarch
@@ -643 +643 @@
-ovirt-node-ng-image-update-placeholder-4.4.6.1-1.el8.noarch
+ovirt-node-ng-image-update-placeholder-4.4.6.2-1.el8.noarch
@@ -651,2 +651,2 @@
-ovirt-release-host-node-4.4.6.1-1.el8.noarch
-ovirt-release44-4.4.6.1-1.el8.noarch
+ovirt-release-host-node-4.4.6.2-1.el8.noarch
+ovirt-release44-4.4.6.2-1.el8.noarch
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
3 years, 11 months
[OLVM] Host non responsive after installation
by alan@softdrive.co
I am using Oracle Linux Virtualization Manager, following this guide: https://docs.oracle.com/en/virtualization/oracle-linux-virtualization-man...
After adding a host to the engine, the host becomes non-responsive due to network errors:
engine.log
2021-04-27 14:53:02,255Z ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-32356) [38586e0e] Host installation failed for host 'c97604b3-5774-4260-92fd-633257aa7498', 'GPU2-2': Network error during communication with the host
Help resolving this would be much appreciated!
3 years, 11 months