ILO2 Fencing
by TomK
Hey Guys,
I've tested my iLO2 fence agent from the oVirt engine CLI and that works:
fence_ilo2 -a 192.168.0.37 -l <USER> --password="<SECRET>" --ssl-insecure --tls1.0 -v -o status
The UI gives me:
Test failed: Failed to run fence status-check on host
'ph-host01.my.dom'. No other host was available to serve as proxy for
the operation.
I'm going to add a second host in a bit, but is there any way to get this
working with just one host? I'm only adding the one host to oVirt for a POC
we are doing at the moment, but the UI forces me to adjust the Power
Management settings before proceeding.
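For what it's worth, the UI fence test is always executed through another host acting as a fence proxy, so with a single host in the setup there is nothing that can run the check even though the agent works from the CLI; adding the second host (or saving the Power Management settings without testing, or leaving Power Management disabled for the POC) is the usual way around it. A small sketch for checking the proxy preference on the engine side; the key name below is from memory, so treat it as an assumption and confirm it against engine-config --list:
# On the engine machine: list the fence-related configuration keys and show
# which hosts are preferred as fence proxies (typically "cluster,dc")
engine-config --list | grep -i fence
engine-config -g FenceProxyDefaultPreferences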
Also:
2018-03-28 02:04:15,183-04 WARN [org.ovirt.engine.core.bll.network.NetworkConfigurator] (EE-ManagedThreadFactory-engine-Thread-335) [2d691be9] Failed to find a valid interface for the management network of host ph-host01.my.dom. If the interface br0 is a bridge, it should be torn-down manually.
2018-03-28 02:04:15,184-04 ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-335) [2d691be9] Exception: org.ovirt.engine.core.bll.network.NetworkConfigurator$NetworkConfiguratorException: Interface br0 is invalid for management network
I have these defined as follows, but it's not clear what it is expecting:
[root@ph-host01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond0 state DOWN qlen 1000
link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
4: eth2: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond0 state DOWN qlen 1000
link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
5: eth3: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq master bond0 state DOWN qlen 1000
link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
21: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP qlen 1000
link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
inet6 fe80::7ae7:d1ff:fe8c:b1ba/64 scope link
valid_lft forever preferred_lft forever
23: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
link/ether fe:69:c7:50:0d:dd brd ff:ff:ff:ff:ff:ff
24: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether 78:e7:d1:8c:b1:ba brd ff:ff:ff:ff:ff:ff
inet 192.168.0.39/23 brd 192.168.1.255 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::7ae7:d1ff:fe8c:b1ba/64 scope link
valid_lft forever preferred_lft forever
[root@ph-host01 ~]# cd /etc/sysconfig/network-scripts/
[root@ph-host01 network-scripts]# cat ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=none
IPADDR=192.168.0.39
NETMASK=255.255.254.0
GATEWAY=192.168.0.1
ONBOOT=yes
DELAY=0
USERCTL=no
DEFROUTE=yes
NM_CONTROLLED=no
DOMAIN="my.dom nix.my.dom"
SEARCH="my.dom nix.my.dom"
HOSTNAME=ph-host01.my.dom
DNS1=192.168.0.224
DNS2=192.168.0.44
DNS3=192.168.0.45
ZONE=public
[root@ph-host01 network-scripts]# cat ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
NM_CONTROLLED=no
BONDING_OPTS="miimon=100 mode=2"
BRIDGE=br0
#
#
# IPADDR=192.168.0.39
# NETMASK=255.255.254.0
# GATEWAY=192.168.0.1
# DNS1=192.168.0.1
[root@ph-host01 network-scripts]#
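In case it helps, host installation expects to create the management bridge (ovirtmgmt) itself on top of the management interface, and a pre-existing br0 is exactly what the warning asks to tear down manually. A minimal sketch of that tear-down, assuming the legacy network service (not NetworkManager) and console access since connectivity will drop briefly; paths and values are taken from the ifcfg files above:
# Run from the console: the host loses its IP while the bridge is removed
ifdown br0
ifdown bond0

# Drop the bridge definition and move its IP settings onto bond0
rm /etc/sysconfig/network-scripts/ifcfg-br0
sed -i '/^BRIDGE=/d' /etc/sysconfig/network-scripts/ifcfg-bond0
cat >> /etc/sysconfig/network-scripts/ifcfg-bond0 <<'EOF'
IPADDR=192.168.0.39
NETMASK=255.255.254.0
GATEWAY=192.168.0.1
EOF

ifup bond0

# Then re-run the host install from the engine; VDSM should build the
# management bridge on top of bond0 by itself.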
--
Cheers,
Tom K.
-------------------------------------------------------------------------------------
Living on earth is expensive, but it includes a free trip around the sun.
Any monitoring tool provided?
by Terry hey
Dear all,
Right now we can only see how much storage is used and the CPU usage on the
oVirt dashboard, but is there any monitoring tool for monitoring virtual
machines over time?
If so, could you give me the procedure?
Regards
Terry
Cache to NFS
by Marcelo Leandro
Hello,
I have one server configured with RAID 6 (36TB of HDD) and I would like to
improve its performance. I read about lvmcache with an SSD and would like to
know whether it is recommended to configure it with NFS, and how I can
calculate the necessary SSD size.
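For the SSD sizing part: with lvmcache the cache-pool metadata only needs roughly 1/1000th of the cache data size, so the SSD mostly just has to be big enough for the hot working set you want to keep off the RAID 6 spindles. A minimal sketch, assuming a hypothetical volume group vg_data whose LV lv_nfs backs the NFS export, and a new SSD at /dev/sdb (all names and sizes are illustrative, not from the original setup):
# Add the SSD to the existing volume group
pvcreate /dev/sdb
vgextend vg_data /dev/sdb

# Create a cache pool on the SSD (metadata is sized automatically here)
lvcreate --type cache-pool -L 400G -n nfs_cache vg_data /dev/sdb

# Attach the cache pool to the slow, RAID 6 backed LV exported over NFS
lvconvert --type cache --cachepool vg_data/nfs_cache vg_data/lv_nfs

# To remove the cache later (dirty blocks are flushed first):
# lvconvert --uncache vg_data/lv_nfs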
Many thanks.
Deploy Self-Hosted Engine in an Active Host
by FERNANDO FREDIANI
Hello
As I mentioned in another thread, I am migrating a bare-metal oVirt Engine
to a Self-Hosted Engine.
For that I am following this documentation:
https://ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Meta...
However, one thing caught my attention and I wanted to clarify: must the
host that will deploy the Self-Hosted Engine be in Maintenance mode, and
therefore have no other VMs running?
I have a Node which is currently part of a cluster and wish to deploy the
Self-Hosted Engine to it. Do I have to put it into Maintenance mode first,
or can I just run 'hosted-engine --deploy'?
Note: this Self-Hosted Engine will manage the existing cluster where this
Node lives. I guess that is not an issue at all and is part of what the
Self-Hosted Engine is intended for.
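In case it's useful, the host can also be flipped into Maintenance without the UI, for example over the REST API; a hedged sketch where the engine URL, credentials and host name are placeholders, not values from this thread:
# Find the host's id, then ask the engine to move it to maintenance
curl -k -u 'admin@internal:password' \
  'https://engine.example.com/ovirt-engine/api/hosts?search=name%3Dmyhost'

curl -k -u 'admin@internal:password' -X POST \
  -H 'Content-Type: application/xml' -d '<action/>' \
  'https://engine.example.com/ovirt-engine/api/hosts/<host-id>/deactivate'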
Thanks
Fernando
oVirt Node Resize tool for local storage
by Matt Simonsen
Hello,
We have a development box with local storage, running oVirt Node 4.1.
It appears that using the admin interface on port 9090 I can resize a
live partition to a smaller size.
Our storage is a separate LVM partition, formatted with ext4.
My question is, both theoretically and practically, whether anyone has
feedback on:
#1: Does this work (i.e. will it shrink the filesystem and then shrink the LV)?
#2: May we do this with VMs running? (A rough sketch follows below.)
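For reference, ext4 cannot be shrunk while mounted, so in practice the filesystem has to be taken offline, and therefore the VMs on that storage stopped, before the LV can be reduced. A rough sketch, assuming a hypothetical LV /dev/vg_local/lv_images behind the local storage domain (names, sizes and the mount point are illustrative):
# Offline work: stop the VMs on this storage and put the domain into
# maintenance, then unmount the filesystem
umount /dev/vg_local/lv_images

# Shrink filesystem and LV together; --resizefs runs e2fsck and resize2fs
# before reducing the LV
lvreduce --resizefs -L 500G /dev/vg_local/lv_images

mount /dev/vg_local/lv_images /path/to/local/storage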
Thanks
Matt
Re: [ovirt-users] Snapshot of the Self-Hosted Engine
by FERNANDO FREDIANI
Hello Sven and all.
Yes storage does have the snapshot function and could be possibility be
used, but I was wondering a even easier way through the oVirt Node CLI or
something similar that can use the qcow2 image snapshot to do that with the
Self-Hosted Engine in Global Maintenance.
I used to run the oVirt Engine in a Libvirt KVM Virtual Machine in a
separate Host and it has always been extremely handy to have this feature.
There has been times where the upgrade was not successfully and just
turning off the VM, starting it from snapshot saved my day.
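As a rough idea of what that could look like from the host CLI (the hosted-engine maintenance and VM start/stop options are real; the backup path and the copy-the-disk approach are my assumptions, not an official snapshot feature):
# Freeze HA handling of the engine VM and shut it down cleanly
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown

# With the VM down, take a full copy of the engine disk as a point-in-time
# fallback (the source path is a placeholder for wherever the hosted-engine
# disk lives on your storage domain)
qemu-img convert -O qcow2 /path/to/hosted-engine-disk /backup/engine-before-upgrade.qcow2

# Bring the engine back and leave global maintenance
hosted-engine --vm-start
hosted-engine --set-maintenance --mode=none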
Regards
Fernando
2018-03-27 14:14 GMT-03:00 Sven Achtelik <Sven.Achtelik(a)eps.aero>:
> Hi Fernando,
>
>
>
> depending on where you have your storage, you could set everything to
> global maintenance, stop the VM and copy the disk image. Or, if your storage
> system is able to do snapshots, you could use that function once the engine
> is stopped. It's the easiest way I can think of right now. What kind of
> storage are you using?
>
>
>
> Sven
>
>
>
> *From:* users-bounces(a)ovirt.org [mailto:users-bounces@ovirt.org] *On
> Behalf Of *FERNANDO FREDIANI
> *Sent:* Tuesday, 27 March 2018 15:24
> *To:* users
> *Subject:* [ovirt-users] Snapshot of the Self-Hosted Engine
>
>
>
> Hello
>
> Is it possible to snapshot the Self-Hosted Engine before an Upgrade ? If
> so how ?
>
> Thanks
>
> Fernando
>
Ping::(action) Failed to ping x.x.x.x, (4 out of 5)
by info@linuxfabrik.ch
Hi all,
we randomly but regularly see this message in our
/var/log/ovirt-hosted-engine-ha/broker.log:
/var/log/ovirt-hosted-engine-ha/broker.log:Thread-1::WARNING::2018-03-27 08:17:25,891::ping::63::ping.Ping::(action) Failed to ping x.x.x.x, (4 out of 5)
The pinged device is a switch (not a gateway). We know that a switch
might drop ICMP packets if it needs to. The interesting thing is that when
it fails, it always fails at "4 out of 5", but in the end (5 out of 5) it
always succeeds.
Is there a way to increase the number of pings, or to use another check
instead of ping?
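I'm not aware of a documented knob for the retry count, but the address the broker pings is the gateway value from the hosted-engine configuration, so pointing it at something that reliably answers ICMP (the real gateway rather than a switch) is one option; a hedged sketch:
# See which address the HA broker is monitoring
grep ^gateway /etc/ovirt-hosted-engine/hosted-engine.conf

# After changing it, restart the HA services on each host
systemctl restart ovirt-ha-broker ovirt-ha-agent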
Regards
Markus
Testing oVirt 4.2
by wodel youchi
Hi,
I am testing oVirt 4.2, using nested KVM for that.
I am using two hypervisors (CentOS 7, fully updated) and the hosted-engine
deployment using the oVirt appliance.
For storage I am using iSCSI and NFSv4.
Versions I am using:
ovirt-engine-appliance-4.2-20180214.1.el7.centos.noarch
ovirt-hosted-engine-setup-2.2.9-1.el7.centos.noarch
kernel-3.10.0-693.21.1.el7.x86_64
I have a problem deploying the hosted-engine VM. When configuring the
deployment (hosted-engine --deploy), it asks for the engine's hostname and
then the engine's IP address. I use a static IP; in my lab I used *192.168.1.104*
as the IP for the engine VM, and I chose to add its hostname entry to the
hypervisor's /etc/hosts.
But the deployment gets stuck every time at the same place: *TASK [Wait for
the host to become non operational]*
After some time it gives up and the deployment fails.
I don't know the reason yet, but I have seen this behavior in the
hypervisor's */etc/hosts*:
At the beginning of the deployment the entry *192.168.2.104
engine01.example.local* is added, then some time after that it is deleted,
and then a new entry is added with the IP *192.168.122.65 engine01.wodel.wd*,
which has nothing to do with the network I am using.
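For context, 192.168.122.x is the libvirt default NAT network: the new ansible-based deployment first boots a local bootstrap engine VM on that network and only moves it to the final bridge and static IP later, so a temporary /etc/hosts entry with such an address during the run is expected. A quick way to see it, using standard virsh commands:
# The bootstrap engine VM runs on libvirt's "default" NAT network
# (192.168.122.0/24) until the deployment migrates it to the target network
virsh net-list --all
virsh net-dumpxml default

# While the deployment is running, the temporary VM is visible here
virsh list --all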
Here is the error I am seeing in the deployment log:
2018-03-24 11:51:31,398+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Wait for the host to become non operational]
2018-03-24 12:02:07,284+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 {u'_ansible_parsed': True, u'_ansible_no_log': False, u'changed': False, u'attempts': 150, u'invocation': {u'module_args': {u'pattern': u'name=hyperv01.wodel.wd', u'fetch_nested': False, u'nested_attributes': []}}, u'ansible_facts': {u'ovirt_hosts': []}}
2018-03-24 12:02:07,385+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:98 fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": []}, "attempts": 150, "changed": false}
2018-03-24 12:02:07,587+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [engine01.wodel.wd] : ok: 15 changed: 8 unreachable: 0 skipped: 4 failed: 0
2018-03-24 12:02:07,688+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [localhost] : ok: 41 changed: 14 unreachable: 0 skipped: 3 failed: 1
2018-03-24 12:02:07,789+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 ansible-playbook rc: 2
2018-03-24 12:02:07,790+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 ansible-playbook stdout:
2018-03-24 12:02:07,791+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:189 to retry, use: --limit @/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.retry
2018-03-24 12:02:07,791+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 ansible-playbook stderr:
2018-03-24 12:02:07,792+0100 DEBUG otopi.context context._executeMethod:143 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py", line 186, in _closeup
    r = ah.run()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", line 194, in run
    raise RuntimeError(_('Failed executing ansible-playbook'))
RuntimeError: Failed executing ansible-playbook
2018-03-24 12:02:07,795+0100 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Closing up': Failed executing ansible-playbook
Any ideas?
Regards