I'm trying to use fence_rhevm in a CentOS 6.8 guest that is part of a
virtual rhcs cluster
My fence-agents version inside the guest is
fence-agents-4.0.15-12.el6.x86_64, and I notice that for this particular
agent nothing changes even when using the latest available package:
[root@p2vnorasvi1 ~]# diff fence_rhevm /usr/sbin/fence_rhevm
< BUILD_DATE="(built Wed Mar 22 04:24:11 UTC 2017)"
> BUILD_DATE="(built Tue May 10 22:28:47 UTC 2016)"
The VM name in oVirt 4.1.1 is p2vorasvi1
Running this command against the engine, I get:
[root@p2vnorasvi1 network-scripts]# fence_rhevm -a 10.4.192.43 -l
"admin@internal" -p "mypassword" -z --shell-timeout=20 --power-wait=10 -v
-o status -n p2vorasvi1
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<title>404 Not Found</title>
<p>The requested URL /api/vms/ was not found on this server.</p>
Failed: Unable to obtain correct plug status or plug is not available
Actually, I get the same error even if I enter a wrong password...
What am I missing?
Do I have to specify the DC/cluster if I have more than one, or something else?
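For what it's worth, in oVirt 4.x the REST API lives under /ovirt-engine/api rather than the old /api prefix, and the 404 on /api/vms/ suggests the agent is still calling the legacy path. A quick check with curl (just a diagnostic sketch reusing the credentials from your command, not a fix):

curl -k -u 'admin@internal:mypassword' https://10.4.192.43/api/vms
curl -k -u 'admin@internal:mypassword' https://10.4.192.43/ovirt-engine/api/vms

If the first call returns the same 404 and the second one returns the VM list, the agent needs to be pointed at the /ovirt-engine/api base; whether your fence_rhevm build exposes an option for that I'm not sure, so check fence_rhevm -h.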
Here is my scenario:
I have the engine and 2 nodes running on VMware Workstation, with 5
interfaces each, with these responsibilities:
vmtraffic - 192.168.192.X
display - 192.168.193.X
storage - 192.168.194.X
ovirtmanagement - 192.168.196.X
Logical networks have been set up as above and all the networks are
reachable. The only network with a gateway is vmtraffic
(192.168.192.143, with gateway 192.168.192.2).
When I go to the nodes via the terminal, the gateway is not set, and
therefore when I run ip route on that node I get this:
[root@node01 ~]# ip route
192.168.192.0/24 dev vmtraffic proto kernel scope link src 192.168.192.143
192.168.193.0/24 dev ens37 proto kernel scope link src 192.168.193.143
192.168.194.0/24 dev ens34 proto kernel scope link src 192.168.194.143
192.168.195.0/24 dev ens35 proto kernel scope link src 192.168.195.143
192.168.196.0/24 dev ens36 proto kernel scope link src 192.168.196.143
I know the solution is to insert a static route, but isn't the interface with
a gateway supposed to be the default gateway for any traffic?
And if I put the routes in manually, won't VDSM overwrite them, say if I upgrade?
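As a stopgap while someone confirms the supported way, a plain iproute2/ifcfg sketch (nothing oVirt-specific, and it does not answer the VDSM-overwrite question) is to add the default route by hand and, to survive reboots, put it in the interface's route file:

ip route add default via 192.168.192.2 dev vmtraffic
echo 'default via 192.168.192.2 dev vmtraffic' >> /etc/sysconfig/network-scripts/route-vmtraffic

The first command takes effect immediately but is lost on reboot; the route- file is the standard CentOS/RHEL way to persist it, with the caveat you already raised that VDSM manages the network configuration files it owns.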
I'm trying to configure ovirt-imageio-proxy through engine-setup, but the
configuration is skipped:
--== CONFIGURATION PREVIEW ==--
Default SAN wipe after delete : False
Firewall manager : firewalld
Update Firewall : True
Configure Image I/O Proxy : False
Configure VMConsole Proxy : True
Configure WebSocket Proxy : True
I have tried to run engine-setup with the option
'--reconfigure-optional-components', but this option has no effect.
The ovirt-imageio-proxy package is installed on this engine host:
# yum list ovirt-imageio-proxy
What can I do to force engine-setup to configure ovirt-imageio-proxy ?
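One thing that might be worth trying (a sketch only; the exact answer-file key name is my assumption, so check the files engine-setup generated under /var/lib/ovirt-engine/setup/answers/ for the real one) is forcing the value through otopi's environment:

engine-setup --otopi-environment="OVESETUP_CONFIG/imageioProxyConfig=bool:True"

Alternatively, put the same key into an answer file and run engine-setup --config-append=<that file>.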
I have a test machine, a NUC6 with an i5, 32 GB of RAM, and an SSD.
It is configured as a single-host environment with a Self-Hosted Engine VM.
Both the host and the SHE are CentOS 7.2, and the oVirt version is 126.96.36.199-1.el7.
I notice that with 3 VMs powered on and doing nothing special (the engine
VM, a CentOS 7 VM and a Fedora 24 VM), the ovirt-ha-agent process on the host
often spikes its CPU usage.
See for example this quick video with the top command running on the host,
which reflects what happens continuously.
Is it normal for ovirt-ha-agent to consume this much CPU?
In /var/log/ovirt-hosted-engine-ha/agent.log I see nothing special,
only messages of type "INFO"; the same for broker.log.
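To put a number on the spikes (generic tooling, nothing oVirt-specific), something like this on the host should show how much CPU the agent really uses over a minute:

pidstat -p $(pgrep -f ovirt-ha-agent | head -1) 5 12
top -b -d 5 -n 12 -p $(pgrep -f ovirt-ha-agent | head -1)

Either output would also make it easier for others on the list to judge whether this is the usual periodic monitoring load or something abnormal.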
This is regarding an HA VM failing to restart on the other host.
I have a setup with 2 hosts in a cluster, let's say host1 and host2,
and one HA VM (with High priority), say vm1.
Also note: the storage domain is configured on host3 and is available all the time.
1> Initially vm1 was running on host2.
2> Then I powered OFF host2 to see whether oVirt would start vm1 on host1.
I found two results in this case, as below:
1> Sometimes vm1 retries to start, but retries on host2 itself.
2> Sometimes vm1 moves into the Down state without retrying.
Can anyone explain this behaviour? Or is this an issue?
Note: I am using oVirt 4.0.
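If it helps with debugging: as far as I know, the engine will only restart an HA VM on another host once it can confirm the original host is really down (which is what power management / fencing on the hosts is for); otherwise it may keep retrying on the same host or leave the VM down. A quick, generic way to see what the engine decided (vm1 and the path are just the names from your description) is to grep the engine log on the engine machine:

grep -i 'vm1' /var/log/ovirt-engine/engine.log | grep -iE 'rerun|down|fenc|highly available'

The matching lines around the time you powered off host2 should show whether a restart was attempted and why it was rejected.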
I'm still testing oVirt 4.1.
I installed the engine and 2 nodes on vanilla CentOS 7.3 hosts with
everything that came from
I regularly checked for updates in the engine host OS with "yum update"
(is there a GUI option for this?). It obviously got an oVirt update from
version 4.1.0 to 188.8.131.52 already some time ago.
I regularly checked for updates on the nodes via the oVirt web GUI
(Installation - Check for Upgrade). There were package updates
available and installed in the past, so I thought that everything was fine.
Now I checked with "yum check-update" in the nodes' OS shell and noticed
that ovirt-release41 is still on 4.1.0 and there are 81 packages
available for update (from CentOS base _and_ oVirt repos, including
ovirt-release41 itself). The oVirt GUI tells me 'no updates found'.
Why didn't these updates get installed? Is it because of the
ovirt-release41 update? Do I have to do this manually with yum?
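If it does come down to updating by hand, the plain-yum sketch on each node would just be (ordinary yum usage, not an official oVirt procedure):

yum update ovirt-release41
yum update

i.e. refresh the release/repo package first, then pull the remaining 81 packages; but it would be good to hear from the list whether the GUI is expected to handle this.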
I just want to confirm: if we have oVirt running, say, 3 VMs on the same host
and the host is configured for bridging, would the traffic stay on the host,
or would it technically hit the switch and come back to the host?
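As far as I know, traffic between VMs attached to the same Linux bridge on the same host (and on the same VLAN/logical network) is switched locally by the bridge and never reaches the physical switch. A quick way to see it for yourself, assuming you can run tcpdump on the host (replace <uplink-nic> with the NIC that backs the bridge), is to ping from one VM to another and watch the uplink:

brctl show
tcpdump -ni <uplink-nic> icmp

brctl show (from bridge-utils; "bridge link" from iproute2 works too) lists which VM tap devices share the bridge; if no ICMP packets appear in the tcpdump while the ping runs, the traffic stayed on the host.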
Red Hat Certified Architect, LinuxStack
602-354-1220 || devin(a)linuxguru.co