/var/log has less than 500MB free space left
by Andrei Verovski
Hi,
I started getting messages that /var/log has less than 500MB of free space left, out of 8GB.
du -h --max-depth=1
reveals that the openvswitch log dir consumes 3.6GB; there are files as old as 2018.
The node software was installed on a fresh CentOS 7.x host from the oVirt admin web console.
I cleaned this directory, yet the question remains: why is the logrotate service not running by default on an oVirt node?
Or is this not recommended?
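In case it matters, what I had in mind is just a minimal snippet dropped into /etc/logrotate.d/ along these lines (the file name and the /var/log/openvswitch path are my assumptions based on what du showed, so please correct me if the node image expects something else):
/var/log/openvswitch/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
followed by a dry run with "logrotate -d /etc/logrotate.d/openvswitch" to check it.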
Thanks in advance.
Andrei
3 years, 5 months
hosted-engine --vm-start not working
by Harry O
Hi,
When I run: hosted-engine --vm-start I get this:
VM exists and is Down, cleaning up and restarting
VM in WaitForLaunch
But the VM never starts:
virsh list --all
Id Name State
-------------------------------
- HostedEngine shut off
systemctl status -l ovirt-ha-agent
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; disabled; vendor preset: disabled)
Active: active (running) since Wed 2021-06-16 13:27:27 CEST; 3min 26s ago
Main PID: 79702 (ovirt-ha-agent)
Tasks: 2 (limit: 198090)
Memory: 28.3M
CGroup: /system.slice/ovirt-ha-agent.service
└─79702 /usr/libexec/platform-python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent
Jun 16 13:27:27 hej1.5ervers.lan systemd[1]: ovirt-ha-agent.service: Succeeded.
Jun 16 13:27:27 hej1.5ervers.lan systemd[1]: Stopped oVirt Hosted Engine High Availability Monitoring Agent.
Jun 16 13:27:27 hej1.5ervers.lan systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.
Jun 16 13:29:42 hej1.5ervers.lan ovirt-ha-agent[79702]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped on localhost
hosted-engine --vm-status
--== Host hej1.5ervers.lan (id: 1) status ==--
Host ID : 1
Host timestamp : 3547
Score : 3400
Engine status : {"vm": "down", "health": "bad", "detail": "Down", "reason": "bad vm status"}
Hostname : hej1.5ervers.lan
Local maintenance : False
stopped : False
crc32 : f35899f8
conf_on_shared_storage : True
local_conf_timestamp : 3547
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3547 (Wed Jun 16 13:32:12 2021)
host-id=1
score=3400
vm_conf_refresh_time=3547 (Wed Jun 16 13:32:12 2021)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host hej2.5ervers.lan (id: 2) status ==--
Host ID : 2
Host timestamp : 94681
Score : 0
Engine status : {"vm": "down_unexpected", "health": "bad", "detail": "Down", "reason": "bad vm status"}
Hostname : hej2.5ervers.lan
Local maintenance : False
stopped : False
crc32 : 40a3f809
conf_on_shared_storage : True
local_conf_timestamp : 94681
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=94681 (Wed Jun 16 13:32:05 2021)
host-id=2
score=0
vm_conf_refresh_time=94681 (Wed Jun 16 13:32:05 2021)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Fri Jan 2 03:23:40 1970
--== Host hej3.5ervers.lan (id: 3) status ==--
Host ID : 3
Host timestamp : 94666
Score : 0
Engine status : {"vm": "down_unexpected", "health": "bad", "detail": "Down", "reason": "bad vm status"}
Hostname : hej3.5ervers.lan
Local maintenance : False
stopped : False
crc32 : a50c2b3e
conf_on_shared_storage : True
local_conf_timestamp : 94666
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=94666 (Wed Jun 16 13:32:09 2021)
host-id=3
score=0
vm_conf_refresh_time=94666 (Wed Jun 16 13:32:09 2021)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Fri Jan 2 03:23:16 1970
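For completeness, the logs I can still pull from the host are (assuming the default locations of a hosted-engine setup, which is what I have here):
less /var/log/ovirt-hosted-engine-ha/agent.log
less /var/log/ovirt-hosted-engine-ha/broker.log
less /var/log/vdsm/vdsm.log
less /var/log/libvirt/qemu/HostedEngine.log
Happy to post the relevant parts if that helps.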
3 years, 5 months
Customization of Ovirt VM Portal
by Alessio B.
Hello to all,
please, is there a way to customize the VM Portal?
In my case I need to hide the "VNC console" option from the dropdown list in the selected VM's properties and leave only "VNC Console Browser".
The user profile is "UserRole".
Thank you very much!
3 years, 5 months
RFC8482 and engine-setup
by Strahil Nikolov
Hello All,
recently I have changed my firewall and DNS server and I have noticed that engine-setup warns with:
Failed to resolve engine.localdomain using DNS, it can be resolved only locally.
The command used was: 'dig +noall +answer FQDN ANY'
It seems that, according to RFC8482, ANY is deprecated and DNS servers can return almost anything. One such example is the output of 'dig wikipedia.org ANY'.
What are our options now? I guess we can query for the A and AAAA records?
Extra info: https://blog.cloudflare.com/rfc8482-saying-goodbye-to-any/
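For example, the explicit queries would look something like this (using the same FQDN that engine-setup complained about):
dig +noall +answer engine.localdomain A
dig +noall +answer engine.localdomain AAAA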
Best Regards,
Strahil Nikolov
3 years, 5 months
[ANN] oVirt 4.4.7 Third Release Candidate is now available for testing
by Sandro Bonazzola
oVirt 4.4.7 Third Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.7
Third Release Candidate for testing, as of June 10th, 2021.
This update is the seventh in a series of stabilization updates to the 4.4
series.
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
If you are upgrading from 4.4.1, please check previous versions' release notes regarding Bug 1837864
<https://bugzilla.redhat.com/show_bug.cgi?id=1837864>
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.4 or similar
* CentOS Stream 8
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.4 or similar
* CentOS Stream 8
* oVirt Node 4.4 based on CentOS Stream 8 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available based on CentOS Stream 8
- oVirt Node NG is already available based on CentOS Stream 8
Additional Resources:
* Read more about the oVirt 4.4.7 release highlights:
http://www.ovirt.org/release/4.4.7/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.7/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
3 years, 5 months
Zombie VM on node
by Ilya Fedotov
Good day, colleagues
Please help me with the SQL UPDATE command to clear the stale information about the number of virtual machines on a node. Currently no virtual machines are running on the node, but the statistics still show 1. See the attached pictures.
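The only thing I could come up with myself is something along these lines against the engine database (the vds_dynamic table and the vm_count column are my guess, please correct the names before running anything):
sudo -u postgres psql engine -c "SELECT vds_id, vm_count FROM vds_dynamic;"
# and, only if the wrong count is confirmed for this host's UUID:
sudo -u postgres psql engine -c "UPDATE vds_dynamic SET vm_count = 0 WHERE vds_id = '<host-uuid>';"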
Thank you
with br, Ilya F
3 years, 5 months
fresh ovirt node 4.4.6 fail on firewalld both host and engine deployment
by Charles Kozler
Hello -
I deployed a fresh oVirt Node 4.4.6, and the only thing I did to the system was configure the NIC with nmtui.
During the Gluster install, the deployment errored out with:
gluster-deployment-1620832547044.log:failed: [n2] (item=5900/tcp) =>
{"ansible_loop_var": "item", "changed": false, "item": "5900/tcp", "msg":
"ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception:
ALREADY_ENABLED: '5900:tcp' already in 'public' Permanent and
Non-Permanent(immediate) operation"}
The fix here was easy - I just deleted the port it was complaining about with firewall-cmd, restarted the installation, and it was all fine.
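For reference, the cleanup was roughly this, with the port taken straight from the error message (which mentions both the permanent and the runtime configuration):
firewall-cmd --zone=public --remove-port=5900/tcp
firewall-cmd --permanent --zone=public --remove-port=5900/tcp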
During the hosted-engine deployment, when the engine VM is being deployed, it dies here:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Open a port on firewalld]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "ERROR:
Exception caught: org.fedoraproject.FirewallD1.Exception: ALREADY_ENABLED:
'6900:tcp' already in 'public' Non-permanent operation"}
Now the issue here is that I do not have access to the engine VM, as it is in a bit of a transient state: when the deployment fails, the currently open image is discarded as soon as the Ansible playbook is kicked off again.
I cannot find any BZ on this and Google is turning up nothing. I don't think firewalld failing because a firewall rule already exists should be a reason to abort the installation.
The interesting part is that this only fails on certain ports, i.e., when I reran the Gluster wizard after 5900 failed, the other ports were presumably still added to the firewall, and the installation completed.
Suggestions?
--
3 years, 5 months
Update of plain CentOS hosts very slow
by Gianluca Cecchi
Hello,
I have a 4.4.5 environment that I'm upgrading to 4.4.6.
I'm upgrading plain CentOS hosts from the GUI.
They are in 4.4.5, so in particular CentOS 8.3 and as part of the upgrade
they have to be put to 8.4.
In the past I used "yum update" directly on the host, but now it seems that is not the correct way.
But the Ansible part related to package updates seems very slow. It gives the impression that it is updating packages one by one rather than all at once, as "yum update" would.
The update has now been running for about 30 minutes, and my internet connection is certainly fast enough.
In /var/log/messages on the host I see lines such as these, one per package:
Jun 8 11:09:30 ov300 python3[3031815]: ansible-dnf Invoked with
name=['rsyslog-relp.x86_64'] state=latest lock_timeout=300
conf_file=/tmp/yum.conf allow_downgrade=False autoremove=False bugfix=False
disable_gpg_check=False disable_plugin=[] disablerepo=[]
download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/
install_repoquery=True install_weak_deps=True security=False
skip_broken=False update_cache=False update_only=False validate_certs=True
disable_excludes=None download_dir=None list=None releasever=None
Jun 8 11:09:32 ov300 python3[3031828]: ansible-dnf Invoked with
name=['runc.x86_64'] state=latest lock_timeout=300 conf_file=/tmp/yum.conf
allow_downgrade=False autoremove=False bugfix=False disable_gpg_check=False
disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[]
enablerepo=[] exclude=[] installroot=/ install_repoquery=True
install_weak_deps=True security=False skip_broken=False update_cache=False
update_only=False validate_certs=True disable_excludes=None
download_dir=None list=None releasever=None
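Just to quantify it, my quick (and admittedly rough) way of counting how many of these per-package invocations have been logged so far:
grep -c 'ansible-dnf Invoked' /var/log/messages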
Any clarification?
Thanks,
Gianluca
3 years, 5 months