Be it known that I no longer work for NAVOCEANO.
I have transferred to the Air Force.
If you need assistance, please submit a Ripken ticket or call the Help Desk at x5122.
V/r
Sam
On Thu, 2020-05-07 at 13:07 +0000, Anton Louw via Users wrote:
>
> Hi All,
>
> One of my nodes went into an unresponsive state, but the VMs running on that
> host are still up. I just want to know: can I restart VDSM on that node, or
> will it impact the running VMs? In another article, somebody restarted
> the engine, and that resolved their issue. I would like to first try
> restarting VDSM, and if that does not work, I will restart the engine.
>
Hi Anton,

Would it be possible to post /var/log/vdsm/vdsm.log from the affected
host, plus the relevant engine.log? I am currently investigating an
engine-to-host connectivity issue that may or may not be related [1].

What are the exact versions of the engine and vdsm packages?
( dnf list --installed | egrep "ovirt|vdsm|jsonrpc" )

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1828669

Thanks!
Artur
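
For reference, here is a minimal sketch of how one might restart VDSM on the
affected host once the logs have been collected. The running VMs are qemu
processes managed by libvirt, so restarting vdsmd alone should not stop them,
but treat this as a sketch rather than a guarantee and keep the logs first:

  # on the affected host (assumes a systemd-based oVirt Node / EL7 host)
  systemctl status vdsmd                     # current state of the VDSM daemon
  journalctl -u vdsmd --since "1 hour ago"   # recent daemon messages, in addition to /var/log/vdsm/vdsm.log
  systemctl restart vdsmd                    # restart VDSM; libvirt keeps the qemu VMs running
  systemctl status vdsmd                     # verify the daemon came back up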
Hi All,
One of my nodes went into an unresponsive state, but the VMs running on that host are still up. I just want to know: can I restart VDSM on that node, or will it impact the running VMs? In another article, somebody restarted the engine, and that resolved their issue. I would like to first try restarting VDSM, and if that does not work, I will restart the engine.
Thanks
Anton Louw
Cloud Engineer: Storage and Virtualization
______________________________________
D: 087 805 1572 | M: N/A
A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
anton.louw(a)voxtelecom.co.za
www.vox.co.za
I'm using oVirt 4.3 (latest) and am able to successfully provision CentOS VMs without any problems.
When I attempt to provision Ubuntu VMs, they hang at startup.
The console shows :
...
...
[ 4.010016] Btrfs loaded
[ 101.268594] random: nonblocking pool is initialized
It stays like this indefinitely.
Again, I have no problems with CentOS images, but I need Ubuntu.
Any tips greatly appreciated.
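
Not an answer, but a read-only way to gather more information from the host
while the guest appears hung (the VM name "ubuntu-test" is hypothetical; a
read-only virsh connection normally works on an oVirt host without extra
credentials). The lack of further console output does not necessarily mean the
guest is stuck, since later boot messages may be going to a different console:

  virsh -r list --all                                            # is the guest still in the "running" state?
  virsh -r dumpxml ubuntu-test | grep -i -A2 rng                 # is a virtio-rng device attached?
  virsh -r dumpxml ubuntu-test | grep -i -A2 'serial\|console'   # is a serial console defined?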
The oVirt Project is pleased to announce the availability of the oVirt 4.3.10
Third Release Candidate for testing, as of May 6th, 2020.
This update is the tenth in a series of stabilization updates to the 4.3
series.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but < 8)
* CentOS Linux (or similar) 7.7 or later (but < 8)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.7 or later (but < 8)
* CentOS Linux (or similar) 7.7 or later (but < 8)
* oVirt Node 4.3 (available for x86_64 only)
See the release notes [1] for installation / upgrade instructions and a
list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available[2]
Additional Resources:
* Read more about the oVirt 4.3.10 release highlights:
http://www.ovirt.org/release/4.3.10/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.10/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
lev(a)redhat.com | lveyde(a)redhat.com
The oVirt Project is excited to announce the availability of the first
Release Candidate of oVirt 4.4.0 for testing, as of May 6th, 2020.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics on top of oVirt 4.3.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Some of the features included in the oVirt 4.4.0 Release Candidate require
content that will be available in CentOS Linux 8.2. They cannot be tested on
RHEL 8.2 beta yet due to an incompatibility in the openvswitch package shipped
by the CentOS Virt SIG, which needs to be rebuilt on top of CentOS 8.2.
Installation instructions
For the engine: either use the appliance or:
- Install CentOS Linux 8 minimal from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...
- dnf install
https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
- dnf update (reboot if needed)
- dnf module enable -y javapackages-tools pki-deps postgresql:12
- dnf install ovirt-engine
- engine-setup
For the nodes:
Either use the oVirt Node ISO or:
- Install CentOS Linux 8 from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...
; select minimal installation
- dnf install
https://resources.ovirt.org/pub/yum-repo/ovirt-release44-pre.rpm
- dnf update (reboot if needed)
- Attach the host to the engine and let it be deployed.
What’s new in oVirt 4.4.0 Release Candidate?
- Hypervisors based on CentOS Linux 8 (rebuilt from award-winning RHEL 8),
  for both oVirt Node and standalone CentOS Linux hosts
- Easier network management and configuration flexibility with NetworkManager
- VMs based on the more modern Q35 chipset, with legacy SeaBIOS and UEFI
  firmware
- Support for direct passthrough of local host disks to VMs
- Live migration improvements for High Performance guests
- New Windows guest tools installer based on the WiX framework, now moved to
  the VirtioWin project
- Dropped support for cluster levels prior to 4.2
- Dropped SDK3 support
- 4K disk support for file-based storage only; iSCSI/FC storage does not
  support 4K disks yet
- Exporting a VM to a data domain
- Editing of floating disks
- Integration of ansible-runner into the engine, allowing more detailed
  monitoring of playbooks executed from the engine
- Adding/reinstalling hosts is now completely based on Ansible
- The OpenStack Neutron Agent can no longer be configured by oVirt; it should
  be configured by TripleO instead
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
* oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
If you manage more than one oVirt instance, OKD, or RDO, we also recommend
trying ManageIQ <http://manageiq.org/>.
In that case, please be sure to take the qc2 image and not the ova image.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.0 release highlights:
http://www.ovirt.org/release/4.4.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.0/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work-life balance. Therefore there is no need to
answer this email outside of your office hours.*
Forwarding to the oVirt users list since it looks to be better suited there.
---------- Forwarded message ---------
From: kelley bryan <kelley.bryan10(a)gmail.com>
Date: Wed, May 6, 2020 at 12:02 PM
Subject: Install of new ovirt baremetal system 4.3.9
To: <infra(a)ovirt.org>
Engine deployment fails near end:
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": true, "cmd": "set -euo
pipefail && firewall-cmd --get-active-zones | grep -v \"^\\s*interfaces\"",
"delta": "0:00:00.352904", "end": "2020-05-05 22:28:01.561606", "msg":
"non-zero return code", "rc": 1, "start": "2020-05-05 22:28:01.208702",
"stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
The system may not be provisioned according to the playbook results: please
check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
Where does oVirt store logs?
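
For what it is worth, the failing task just pipes firewall-cmd output through
grep under "set -euo pipefail", so a return code of 1 usually means either
that firewalld is not running or that no zone has any interface attached
(empty output makes the grep, and therefore the task, fail). A quick check on
the host, offered as a sketch rather than anything taken from the playbook:

  systemctl status firewalld        # is firewalld running at all?
  firewall-cmd --state              # should print "running"
  firewall-cmd --get-active-zones   # empty output here is what makes the task fail

As for the logs: hosted-engine deployment logs typically end up under
/var/log/ovirt-hosted-engine-setup/ on the host, and engine-setup logs under
/var/log/ovirt-engine/setup/ on the engine.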
--
Anton Marchukov
Associate Manager - RHV DevOps - Red Hat
Forwarding to the oVirt users list.
---------- Forwarded message ---------
From: <srivathsa.puliyala(a)dunami.com>
Date: Wed, May 6, 2020 at 12:01 PM
Subject: Ovirt host GetGlusterVolumeHealInfoVDS failed events
To: <infra(a)ovirt.org>
Hi,
We have an oVirt cluster with 4 hosts and the hosted engine running on one of
them (all the nodes provide the storage with GlusterFS).
Currently there are 53 VMs running.
The version of the oVirt Engine is 4.2.8.2-1.el7 and GlusterFS is 3.12.15.
For the past week we have had multiple events popping up in the oVirt UI
about GetGlusterVolumeHealInfoVDS, from all of the nodes seemingly at random:
roughly one ERROR event every ~13 minutes.
Sample Event dashboard example:
May 4, 2020, 2:32:14 PM - Status of host <host-1> was set to Up.
May 4, 2020, 2:32:11 PM - Manually synced the storage devices from host
<host-1>
May 4, 2020, 2:31:55 PM - Host <host-1> is not responding. Host cannot be
fenced automatically because power management for the host is disabled.
May 4, 2020, 2:31:55 PM - VDSM <host-1> command GetGlusterVolumeHealInfoVDS
failed: Message timeout which can be caused by communication issues
May 4, 2020, 2:19:14 PM - Status of host <host-2> was set to Up.
May 4, 2020, 2:19:12 PM - Manually synced the storage devices from host
<host-2>
May 4, 2020, 2:18:49 PM - Host <host-2> is not responding. Host cannot be
fenced automatically because power management for the host is disabled.
May 4, 2020, 2:18:49 PM - VDSM <host-2> command GetGlusterVolumeHealInfoVDS
failed: Message timeout which can be caused by communication issues
May 4, 2020, 2:05:55 PM - Status of host <host-2> was set to Up.
May 4, 2020, 2:05:54 PM - Manually synced the storage devices from host
<host-2>
May 4, 2020, 2:05:35 PM - Host <host-2> is not responding. Host cannot be
fenced automatically because power management for the host is disabled.
May 4, 2020, 2:05:35 PM - VDSM <host-2> command GetGlusterVolumeHealInfoVDS
failed: Message timeout which can be caused by communication issues
May 4, 2020, 1:52:45 PM - Status of host <host-3> was set to Up.
May 4, 2020, 1:52:44 PM - Manually synced the storage devices from host
<host-3>
May 4, 2020, 1:52:22 PM - Host <host-3> is not responding. Host cannot be
fenced automatically because power management for the host is disabled.
May 4, 2020, 1:52:22 PM - VDSM <host-3> command GetGlusterVolumeHealInfoVDS
failed: Message timeout which can be caused by communication issues
May 4, 2020, 1:39:11 PM - Status of host <host-4> was set to Up.
May 4, 2020, 1:39:11 PM - Manually synced the storage devices from host
<host-4>
May 4, 2020, 1:39:11 PM - Host <host-4> is not responding. Host cannot be
fenced automatically because power management for the host is disabled.
May 4, 2020, 1:39:11 PM - VDSM <host-4> command GetGlusterVolumeHealInfoVDS
failed: Message timeout which can be caused by communication issues
May 4, 2020, 1:26:29 PM - Status of host <host-3> was set to Up.
May 4, 2020, 1:26:28 PM - Manually synced the storage devices from host
<host-3>
May 4, 2020, 1:26:11 PM - Host <host-3> is not responding. Host cannot be
fenced automatically because power management for the host is disabled.
May 4, 2020, 1:26:11 PM - VDSM <host-3> command GetGlusterVolumeHealInfoVDS
failed: Message timeout which can be caused by communication issues
May 4, 2020, 1:13:10 PM - Status of host <host-1> was set to Up.
May 4, 2020, 1:13:08 PM - Manually synced the storage devices from host
<host-1>
May 4, 2020, 1:12:51 PM - Host <host-1> is not responding. Host cannot be
fenced automatically because power management for the host is disabled.
May 4, 2020, 1:12:51 PM - VDSM <host-1> command GetGlusterVolumeHealInfoVDS
failed: Message timeout which can be caused by communication issues
and so on.....
When I look at the Compute > Hosts dashboard, I see the host status go to DOWN
when the VDSM event (GetGlusterVolumeHealInfoVDS failed) appears, and then the
host status is automatically set back to UP almost immediately.
FYI: while a host's status is DOWN, the VMs running on that host do not get
migrated and everything keeps running perfectly fine.
This is happening all day. Is there something I can troubleshoot?
Appreciate your comments.
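
The pattern of the events (a timeout roughly every 13 minutes, after which the
host immediately comes back Up) suggests the heal-info query itself may simply
be taking longer than the engine is willing to wait, so it may be worth timing
it by hand on one of the hosts. A minimal sketch, where "myvol" stands in for
one of your actual volume names:

  gluster volume list                   # the volumes defined in this cluster
  time gluster volume heal myvol info   # how long does the heal-info query actually take?
  gluster volume status myvol           # brick and self-heal daemon status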
--
Anton Marchukov
Associate Manager - RHV DevOps - Red Hat