oVirt SHE 4.2.8 and CentOS 7.6
by Vrgotic, Marko
Hi all,
Looking at 4.2.8, the supported host version is EL 7.5.
Does that mean it is not recommended to upgrade an already running SHE to higher than EL 7.5, or does it mean not to deploy it on an EL version lower than 7.5?
Thank you.
Marko Vrgotic
6 years, 3 months
Re: converting an oVirt host to be able to host a Self-Hosted Environment
by Simone Tiraboschi
On Mon, Jan 28, 2019 at 1:31 PM Jarosław Prokopowski <jprokopowski(a)gmail.com>
wrote:
> Thanks Simone
>
> I forgot to mention that this is to have a second node that is able to host
> the self-hosted engine for HA purposes.
> There is already one node that hosts the self-hosted engine and I want to
> have a second one.
> Will it work in this case?
>
No, in this case it's by far easier:
you should just set that host into maintenance mode from the engine and
remember to switch on the hosted-engine deployment flag when you go to
reinstall the host.
>
>
>
> On Mon, Jan 28, 2019 at 1:25 PM Simone Tiraboschi <stirabos(a)redhat.com>
> wrote:
>
>>
>>
>> On Mon, Jan 28, 2019 at 12:21 PM Jarosław Prokopowski <
>> jprokopowski(a)gmail.com> wrote:
>>
>>> Hi Guys,
>>>
>>> Is there a way to convert an existing oVirt node (CentOS) to be able to
>>> host a Self-Hosted Environment?
>>> If so, how can I do that?
>>>
>>
>> Hi, the best option is with backup and restore.
>> Basically you should take a backup of your current engine with
>> engine-backup
>> and then deploy hosted-engine with hosted-engine --deploy
>> --restore-from-file=mybackup.tar.gz
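>> For example, a minimal sketch (engine-backup is run on the current engine
>> machine; the file names are just placeholders):
>>
>>   engine-backup --mode=backup --file=mybackup.tar.gz --log=backup.log
>>   # copy mybackup.tar.gz to the host you are deploying on, then:
>>   hosted-engine --deploy --restore-from-file=mybackup.tar.gz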
>>
>>
>>>
>>> Thanks
>>> Jaroson
>>> _______________________________________________
>>> Users mailing list -- users(a)ovirt.org
>>> To unsubscribe send an email to users-leave(a)ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZVY4SD5RTNV...
>>>
>>
6 years, 3 months
Ovirt snapshot issues
by Alex K
Hi all,
I have ovirt 4.2.7, self-hosted on top of gluster, with two servers.
I have a specific VM which has encountered some snapshot issues.
The engine lists 4 snapshots, and when trying to delete one of them I get
"General command validation failure".
The VM was being backed up periodically by a python script which was
creating a snapshot -> clone -> export -> delete clone -> delete snapshot.
There were times when the VM complained of illegal snapshots following such
backup runs, and I had to delete the illegal snapshot references from the
engine DB (following some steps found online), otherwise I would not be able
to start the VM if it was shut down. It seems, though, that this is not a
clean process and leaves the underlying image of the VM in an inconsistent
state with regard to its snapshots, because when checking the backing chain
of the image file I get:
*b46d8efe-885b-4a68-94ca-e8f437566bee* (active VM) ->
*b7673dca-6e10-4a0f-9885-1c91b86616af* ->
*4f636d91-a66c-4d68-8720-d2736a3765df* ->
6826cb76-6930-4b53-a9f5-fdeb0e8012ac ->
61eea475-1135-42f4-b8d1-da6112946bac ->
*604d84c3-8d5f-4bb6-a2b5-0aea79104e43* ->
1e75898c-9790-4163-ad41-847cfe84db40 ->
*cf8707f2-bf1f-4827-8dc2-d7e6ffcc3d43* ->
3f54c98e-07ca-4810-82d8-cbf3964c7ce5 (raw image)
The bold ones are the ones shown in the engine GUI. The VM runs normally
without issues.
I was wondering if I could use qemu-img commit to consolidate and remove the
snapshots that are no longer referenced by the engine. Any ideas from your
side?
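Roughly what I have in mind (only a sketch, with the VM shut down and a full
copy of the image chain taken first; the volume names are placeholders):

  # inspect the chain as qemu sees it
  qemu-img info --backing-chain <top-volume>
  # fold an unreferenced overlay into its backing file
  qemu-img commit <unreferenced-volume>
  # then repoint the next volume up at that backing file before removing the
  # committed one
  qemu-img rebase -u -b <new-backing-volume> <next-volume-up>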
Thanx,
Alex
6 years, 3 months
Nvidia Grid K2 and oVirt GPU Passthrough
by jarheadx@hotmail.de
Hello,
I have tried every piece of documentation I could find to get GPU passthrough with an Nvidia Grid K2 working on oVirt, but I failed.
I am confident that it should run with my hardware, but I have no ideas anymore. Maybe the community can help me.
Currently my VMs (Win7 and Win10) crash or hang on startup.
I have done these steps:
1. lspci -nnk
.........
07:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104GL [GRID K2] [10de:11bf] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:100a]
Kernel driver in use: pci-stub
Kernel modules: nouveau
08:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104GL [GRID K2] [10de:11bf] (rev a1)
Subsystem: NVIDIA Corporation Device [10de:100a]
Kernel driver in use: pci-stub
Kernel modules: nouveau
.........
2. /etc/default/grub
.........
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet pci-stub.ids=10de:11bf rdblacklist=nouveau amd_iommu=on"
........
3.
Added the line
"options vfio-pci ids=10de:11bf"
to /etc/modprobe.d/vfio.conf
dmesg | grep -i vfio ->
[ 11.202767] VFIO - User Level meta-driver version: 0.3
[ 11.315368] vfio_pci: add [10de:11bf[ffff:ffff]] class 0x000000/00000000
[ 1032.582778] vfio_ecap_init: 0000:07:00.0 hiding ecap 0x19@0x900
[ 1046.232009] vfio-pci 0000:07:00.0: irq 61 for MSI/MSI-X
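Note that the grub and modprobe changes above only take effect after the boot
files are regenerated and the host is rebooted; a minimal sketch, assuming a
BIOS-booted CentOS 7 host (the grub.cfg path differs on UEFI):

  grub2-mkconfig -o /boot/grub2/grub.cfg   # pick up the new kernel arguments
  dracut -f                                # rebuild the initramfs so the vfio-pci/pci-stub options apply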
-------------------------------------------------------------------------
After assigning the GPU to the VM, the OS hangs on startup
Any ideas?
Best Regards
Reza
6 years, 3 months
Using oVirt for Desktop Virtualization
by jarheadx@hotmail.de
Hey guys,
I want to use oVirt as my desktop environment, so I bought a GRID K2 for GPU passthrough. But I still get lag on video streaming, for example HD videos.
I have used SPICE, VNC and Remote Desktop, but I guess they are not good at video compression over WAN or LAN.
Is there any other remote access app which allows me to use oVirt as my virtual desktop with smooth and reliable video output?
Many Thanks
Volrath
6 years, 3 months
Re: ovirt 4.2 HCI rollout
by Dominik Holler
On Tue, 22 Jan 2019 11:15:12 +0000
Markus Schaufler <markus.schaufler(a)digit-all.at> wrote:
> Thanks for your reply,
>
> getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> 10.1.31.20
>
> attached you'll find the logs.
>
Thanks, to my eyes this looks like a bug.
I tried to isolate the relevant lines in the attached playbook.
Markus, would you be so kind as to check whether ovirt-4.2.8 is working for you?
> ________________________________
> From: Dominik Holler <dholler(a)redhat.com>
> Sent: Monday, 21 January 2019 17:52:35
> To: Markus Schaufler
> Cc: users(a)ovirt.org; Simone Tiraboschi
> Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout
>
> Would you please share the related ovirt-host-deploy-ansible-*.log
> stored on the host in /var/log/ovirt-hosted-engine-setup ?
>
> Would you please also share the output of
> getent ahosts YOUR_HOSTED_ENGINE_FQDN | cut -d' ' -f1 | uniq
> if executed on this host?
>
>
> On Mon, 21 Jan 2019 13:37:53 -0000
> "Markus Schaufler" <markus.schaufler(a)digit-all.at> wrote:
>
> > Hi,
> >
> > I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 CentOS VMs by
> > following
> > https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyp...
> > The gluster deployment was successful, but at HE deployment "stage 5" I
> > got the following error:
> >
> > [ INFO ] TASK [Reconfigure OVN central address]
> > [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> > an option with an undefined variable. The error was: 'dict object'
> > has no attribute 'stdout_lines'\n\nThe error appears to have been
> > in
> > '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> > line 522, column 5, but may\nbe elsewhere in the file depending on
> > the exact syntax problem.\n\nThe offending line appears to be:\n\n
> > #
> > https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
> > - name: Reconfigure OVN central address\n ^ here\n"}
> >
> >
> > /var/log/messages:
> > Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> > Engine VM stopped on localhost Jan 21 14:10:01 HCI01 systemd:
> > Started Session 22 of user root. Jan 21 14:10:02 HCI01 systemd:
> > Started Session c306 of user root. Jan 21 14:10:03 HCI01 systemd:
> > Started Session c307 of user root. Jan 21 14:10:06 HCI01
> > vdsm[3650]: WARN executor state: count=5 workers=set([<Worker
> > name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker
> > name=periodic/1 running <Task discardable <Operation
> > action=<vdsm.virt.sampling.VMBulkstatsMonitor object at
> > 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at
> > 0x7fd33c1e0ed0> discarded task#=413 at 0x7fd2d5ed0510>, <Worker
> > name=periodic/3 waiting task#=414 at 0x7fd2d5ed0b10>, <Worker
> > name=periodic/5 waiting task#=0 at 0x7fd2d425f650>, <Worker
> > name=periodic/2 waiting task#=412 at 0x7fd2d5ed07d0>])
> > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered disabled state
> > Jan 21 14:10:06 HCI01 kernel: device vnet0 left promiscuous mode
> > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered disabled state
> > Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info>
> > [1548076206.9177] device (vnet0): state change: disconnected ->
> > unmanaged (reason 'unmanaged', sys-iface-state: 'remo ved') Jan 21
> > 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180]
> > device (vnet0): released from master device ovirtmgmt Jan 21
> > 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space
> > available Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21
> > 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to
> > read from monitor: Connection reset by peer Jan 21 14:10:07 HCI01
> > kvm: 0 guests now active Jan 21 14:10:07 HCI01 systemd-machined:
> > Machine qemu-3-HostedEngine terminated. Jan 21 14:10:07 HCI01
> > libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning :
> > qemuGetProcessInfo:1406 : cannot parse process status data Jan 21
> > 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704:
> > warning : qemuGetProcessInfo:1406 : cannot parse process status
> > data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000:
> > 2704: warning : qemuGetProcessInfo:1406 : cannot parse process
> > status data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21
> > 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot
> > parse process status data Jan 21 14:10:07 HCI01 libvirtd:
> > 2019-01-21 13:10:07.126+0000: 2704: error :
> > virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev:
> > Interface not found Jan 21 14:10:07 HCI01 firewalld[24040]:
> > WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out
> > -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet 0'
> > failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try
> > `iptables -h' or 'iptables --help' for more information. Jan 21
> > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out
> > vnet0 -g FP-vnet0' failed: iptables v 1.4.21: goto 'FP-vnet0' is
> > not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev
> > --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1. 4.21: goto
> > 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > --help' for more information. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0'
> > failed: iptable s v1.4.21: goto 'HJ-vnet0' is not a
> > chain#012#012Try `iptables -h' or 'iptables --help' for more
> > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed:
> > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -X FP-vnet0' failed: iptables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed:
> > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -X FJ-vnet0' failed: iptables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed:
> > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -X HJ-vnet0' failed: iptables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vne t0'
> > failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a
> > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21:
> > goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed:
> > ip6tables v 1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try
> > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > --physdev-in vnet0 -g HJ-vnet0' failed: ip6tab les v1.4.21: goto
> > 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > --help' for more information. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed:
> > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No
> > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed:
> > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No
> > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0'
> > failed: Illegal target name 'libvirt-J-vnet0'. Jan 21 14:10:07
> > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'. Jan
> > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> > libvirt-J-vnet0' failed: Illegal targe t name 'libvirt-J-vnet0'.
> > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > libvirt-P-vnet0' failed: Illegal targ et name 'libvirt-P-vnet0'.
> > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
> > Chain 'libvirt-J-vnet0' doesn't exis t. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > --concurrent -t nat -L libvirt-P-vnet0' failed: Chain
> > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > --concurrent -t nat -F libvirt-J-vnet0' failed: Chain
> > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > --concurrent -t nat -X libvirt-J-vnet0' failed: Chain
> > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > --concurrent -t nat -F libvirt-P-vnet0' failed: Chain
> > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > --concurrent -t nat -X libvirt-P-vnet0' failed: Chain
> > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> > vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not
> > a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m
> > physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21:
> > goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or
> > 'iptables --help' for more information. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > `iptables -h' or 'iptables --help' for more information. Jan 21
> > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev
> > --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto
> > 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > --help' for more information. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > failed: iptables: Bad rule (does a matching rule exist in that
> > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -X FO-vnet0' failed: iptables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -X FI-vnet0' failed: iptables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > -w -X HI-vnet0' failed: iptables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0'
> > failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a
> > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21:
> > goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto
> > 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > --help' for more information. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > failed: ip6tables: Bad rule (does a matching rule exist in that
> > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No
> > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No
> > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that
> > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D
> > POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target
> > name 'libvirt-O-vnet0'. Jan 21 14:10:07 HCI01 firewalld[24040]:
> > WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L
> > libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist. Jan
> > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> > Chain 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > --concurrent -t nat -X libvirt-O-vnet0' failed: Chain
> > 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01 vdsm[3650]:
> > WARN
> > File: /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0
> > already removed Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting
> > to remove a non existing network:
> > ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21 14:10:07
> > HCI01 vdsm[3650]: WARN Attempting to remove a non existing net
> > user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21
> > 14:10:07 HCI01 vdsm[3650]: WARN
> > File: /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0
> > already removed
> >
> > any ideas on that?
> > _______________________________________________
> > Users mailing list -- users(a)ovirt.org
> > To unsubscribe send an email to users-leave(a)ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct:
> > https://www.ovirt.org/community/about/community-guidelines/ List
> > Archives:
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMMX5CY6VHF...
>
6 years, 3 months
Re: Nvidia Grid K2 and Ovirt VGPU
by Michal Skrivanek
> On 11 Jan 2019, at 18:11, R A <jarheadx(a)hotmail.de> wrote:
>
> Hello Michal,
>
> many thanks for your response.
>
> Are you sure that there is no vGPU support? Please check lines below
Hi Reza,
it’s nvidia’s doc, you would have to talk to them if you have more questions. All I understand from that doc (it’s just little more below on the same page) is that they explicitly say that K1 and K2 do not support vGPU. This might be a licensing limitation or temporary or anything else, I don’t know.
Thanks,
michal
>
> <image.png>
>
>
> and here:
>
> <image.png>
>
> The NVIDIA Grid K2 is GRID capable.
>
> Many thanks!
> Reza
>
> From: Michal Skrivanek <michal.skrivanek(a)redhat.com>
> Sent: Friday, 11 January 2019 12:04
> To: jarheadx(a)hotmail.de
> Cc: users(a)ovirt.org
> Subject: Re: [ovirt-users] Nvidia Grid K2 and Ovirt VGPU
>
> > On 10 Jan 2019, at 02:14, jarheadx(a)hotmail.de wrote:
> >
> > Hello,
> >
> > Sorry for my bad English.
> >
> > I have just bought the NVIDIA Grid K2 and want to pass the vGPU through to several VMs, but I am not able to manage this.
> > I need the driver for guest and host, but cannot download it. On the official NVIDIA site there are only drivers for XenServer and VMware.
> >
> > This NVIDIA page says that there is support for oVirt (RHEV) and the K2 ( https://docs.nvidia.com/grid/4.6/grid-vgpu-release-notes-red-hat-el-kvm/i... )
>
> The page says K1 and K2 do not support vGPU so you could only do Host
> Device Passthrough of the whole GPU
>
> >
> > Can someone please tell me what is going on? Is there any possibility to run the Grid K2 in oVirt with vGPU (or Host Device Passthrough)?
> >
> > Furthermore, if I run "vdsm-client Host hostdevListByCaps" on the host, I do not get any mdev markers as described here: https://www.ovirt.org/documentation/install-guide/appe-Preparing_a_Host_f...
>
> That’s only for vgpu which it doesn’t support, so you do not see any mdev
>
> >
> > Many Thanks
> > Volrath
> > _______________________________________________
> > Users mailing list -- users(a)ovirt.org
> > To unsubscribe send an email to users-leave(a)ovirt.org
> > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> > List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/FBIEV6OCU4V...
6 years, 3 months
Cluster upgrade with ansible only upgrades one host then stops
by Jayme
I have a three-node HCI setup running 4.2.7 and want to upgrade to 4.2.8.
When I use ansible to perform the host updates, for some reason it fully
updates one host and then stops without error; it does not continue upgrading
the remaining two hosts. If I run it again it will proceed to upgrade the
next host. Is there something wrong with the ansible plays I am using, or
does the command I'm using to run ansible need to specify all hosts to run
against? I don't understand why it's not upgrading all hosts in one single
run. Here is the complete ansible output of the last run; in this example it
fully updated and rebooted host0 with no errors but did not proceed to
upgrade host1 or host2:
$ cat ovirt-upgrade
# ansible-playbook --ask-vault-pass upgrade.yml
Vault password:
PLAY [oVirt Cluster Upgrade]
*************************************************************************************************************************************************************************************************************************************************************************************************
TASK [oVirt.cluster-upgrade : set_fact]
**************************************************************************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [oVirt.cluster-upgrade : Login to oVirt]
********************************************************************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [oVirt.cluster-upgrade : Get hosts]
*************************************************************************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [oVirt.cluster-upgrade : Check if there are hosts to be updated]
********************************************************************************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [oVirt.cluster-upgrade : include_tasks]
*********************************************************************************************************************************************************************************************************************************************************************************
included:
/usr/share/ansible/roles/ovirt.cluster-upgrade/tasks/cluster_policy.yml for
localhost
TASK [oVirt.cluster-upgrade : Get cluster facts]
*****************************************************************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [oVirt.cluster-upgrade : Get name of the original scheduling policy]
****************************************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [oVirt.cluster-upgrade : Remember the cluster scheduling policy]
********************************************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [oVirt.cluster-upgrade : Remember the cluster scheduling policy
properties]
*********************************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [oVirt.cluster-upgrade : Get API facts]
*********************************************************************************************************************************************************************************************************************************************************************************
ok: [localhost]
TASK [oVirt.cluster-upgrade : Set in cluster upgrade policy]
*****************************************************************************************************************************************************************************************************************************************************************
changed: [localhost]
TASK [oVirt.cluster-upgrade : Get list of VMs in cluster]
********************************************************************************************************************************************************************************************************************************************************************
ok: [localhost]
[ INFO ] Configuring WebSocket Proxy
skipping: [localhost] => (item=vm1)
skipping: [localhost] => (item=vm2)
skipping: [localhost] => (item=vm3)
TASK [oVirt.cluster-upgrade : Create list of VM names which was shutted
down]
************************************************************************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [oVirt.cluster-upgrade : include_tasks]
*********************************************************************************************************************************************************************************************************************************************************************************
included: /usr/share/ansible/roles/ovirt.cluster-upgrade/tasks/upgrade.yml
for localhost => (item={u'comment': u'', u'update_available': True,
u'protocol': u'stomp', u'affinity_labels': [], u'hooks': [],
u'max_scheduling_memory': 235034116096, u'cluster': {u'href':
u'/ovirt-engine/api/clusters/a45fe964-9989-11e8-b3f7-00163e4bf18a', u'id':
u'a45fe964-9989-11e8-b3f7-00163e4bf18a'}, u'href':
u'/ovirt-engine/api/hosts/771c67eb-56e6-4736-8c67-668502d4ecf5', u'spm':
{u'priority': 5, u'status': u'none'}, u'id':
u'771c67eb-56e6-4736-8c67-668502d4ecf5', u'external_status': u'ok',
u'statistics': [], u'certificate': {u'organization': u'xxxxxxxx.com',
u'subject': u'O=xxxxxxxx.com,CN=host0.xxxxxxxx.com'}, u'nics': [],
u'iscsi': {u'initiator': u'iqn.1994-05.com.redhat:ac5566ae1e1a'}, u'port':
54321, u'hardware_information': {u'serial_number': u'70833W1',
u'product_name': u'PowerEdge R720', u'uuid':
u'4C4C4544-0030-3810-8033-B7C04F335731', u'supported_rng_sources':
[u'hwrng', u'random'], u'manufacturer': u'Dell Inc.'}, u'version':
{u'full_version': u'vdsm-4.20.43-1.el7', u'revision': 0, u'build': 43,
u'minor': 20, u'major': 4}, u'memory': 270258929664, u'ksm': {u'enabled':
False}, u'se_linux': {u'mode': u'permissive'}, u'type': u'ovirt_node',
u'storage_connection_extensions': [], u'status': u'up', u'tags': [],
u'katello_errata': [], u'external_network_provider_configurations': [],
u'ssh': {u'port': 22, u'fingerprint':
u'SHA256:TiiMiaHi++kqhmurXdlQjtwbb3Q1oIBi5ab2alKujyI'}, u'address': u'
host0.xxxxxxxx.com', u'numa_nodes': [], u'device_passthrough': {u'enabled':
False}, u'unmanaged_networks': [], u'permissions': [], u'numa_supported':
True, u'libvirt_version': {u'full_version': u'libvirt-3.9.0-14.el7_5.8',
u'revision': 0, u'build': 0, u'minor': 9, u'major': 3},
u'power_management': {u'kdump_detection': True, u'enabled': True,
u'pm_proxies': [{u'type': u'cluster'}, {u'type': u'dc'}],
u'automatic_pm_enabled': True}, u'name': u'host0', u'devices': [],
u'summary': {u'active': 4, u'migrating': 0, u'total': 4},
u'auto_numa_status': u'enable', u'transparent_huge_pages': {u'enabled':
True}, u'network_attachments': [], u'os': {u'version': {u'full_version':
u'7 - 5.1804.5.el7.centos', u'major': 7}, u'type': u'RHEL',
u'custom_kernel_cmdline': u'', u'reported_kernel_cmdline':
u'BOOT_IMAGE=/ovirt-node-ng-4.2.7.1-0.20181114.0+1/vmlinuz-3.10.0-862.14.4.el7.x86_64
root=/dev/onn_host0/ovirt-node-ng-4.2.7.1-0.20181114.0+1 ro
crashkernel=auto rd.lvm.lv=onn_host0/swap
rd.lvm.lv=onn_host0/ovirt-node-ng-4.2.7.1-0.20181114.0+1
rhgb quiet LANG=en_CA.UTF-8
img.bootid=ovirt-node-ng-4.2.7.1-0.20181114.0+1'}, u'cpu': {u'speed':
1200.0, u'name': u'Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz', u'topology':
{u'cores': 8, u'threads': 1, u'sockets': 2}}, u'kdump_status': u'disabled'})
included: /usr/share/ansible/roles/ovirt.cluster-upgrade/tasks/upgrade.yml
for localhost => (item={u'comment': u'', u'update_available': True,
u'protocol': u'stomp', u'affinity_labels': [], u'hooks': [],
u'max_scheduling_memory': 250759610368, u'cluster': {u'href':
u'/ovirt-engine/api/clusters/a45fe964-9989-11e8-b3f7-00163e4bf18a', u'id':
u'a45fe964-9989-11e8-b3f7-00163e4bf18a'}, u'href':
u'/ovirt-engine/api/hosts/fb1e62d5-1dc1-4ccc-8b2b-cf48f7077d0d', u'spm':
{u'priority': 5, u'status': u'none'}, u'id':
u'fb1e62d5-1dc1-4ccc-8b2b-cf48f7077d0d', u'external_status': u'ok',
u'statistics': [], u'certificate': {u'organization': u'xxxxxxxx.com',
u'subject': u'O=xxxxxxxx.com,CN=host1.xxxxxxxx.com'}, u'nics': [],
u'iscsi': {u'initiator': u'iqn.1994-05.com.redhat:2f5ea98fcec4'}, u'port':
54321, u'hardware_information': {u'serial_number': u'FYRTGX1',
u'product_name': u'PowerEdge R720', u'uuid':
u'4C4C4544-0059-5210-8054-C6C04F475831', u'supported_rng_sources':
[u'hwrng', u'random'], u'manufacturer': u'Dell Inc.'}, u'version':
{u'full_version': u'vdsm-4.20.43-1.el7', u'revision': 0, u'build': 43,
u'minor': 20, u'major': 4}, u'memory': 270258929664, u'ksm': {u'enabled':
False}, u'se_linux': {u'mode': u'permissive'}, u'type': u'ovirt_node',
u'storage_connection_extensions': [], u'status': u'up', u'tags': [],
u'katello_errata': [], u'external_network_provider_configurations': [],
u'ssh': {u'port': 22, u'fingerprint':
u'SHA256:HGU2WhIlPfxgXHXHOE8IYT4/NGAQeUBRe0QA34qQ+nc'}, u'address': u'
host1.xxxxxxxx.com', u'numa_nodes': [], u'device_passthrough': {u'enabled':
False}, u'unmanaged_networks': [], u'permissions': [], u'numa_supported':
True, u'libvirt_version': {u'full_version': u'libvirt-3.9.0-14.el7_5.8',
u'revision': 0, u'build': 0, u'minor': 9, u'major': 3},
u'power_management': {u'kdump_detection': True, u'enabled': True,
u'pm_proxies': [{u'type': u'cluster'}, {u'type': u'dc'}],
u'automatic_pm_enabled': True}, u'name': u'host1', u'devices': [],
u'summary': {u'active': 4, u'migrating': 0, u'total': 4},
u'auto_numa_status': u'enable', u'transparent_huge_pages': {u'enabled':
True}, u'network_attachments': [], u'os': {u'version': {u'full_version':
u'7 - 5.1804.5.el7.centos', u'major': 7}, u'type': u'RHEL',
u'custom_kernel_cmdline': u'', u'reported_kernel_cmdline':
u'BOOT_IMAGE=/ovirt-node-ng-4.2.7.1-0.20181114.0+1/vmlinuz-3.10.0-862.14.4.el7.x86_64
root=/dev/onn_host1/ovirt-node-ng-4.2.7.1-0.20181114.0+1 ro
crashkernel=auto rd.lvm.lv=onn_host1/ovirt-node-ng-4.2.7.1-0.20181114.0+1
rd.lvm.lv=onn_host1/swap rhgb quiet LANG=en_CA.UTF-8
img.bootid=ovirt-node-ng-4.2.7.1-0.20181114.0+1'}, u'cpu': {u'speed':
1200.0, u'name': u'Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz', u'topology':
{u'cores': 8, u'threads': 1, u'sockets': 2}}, u'kdump_status': u'disabled'})
included: /usr/share/ansible/roles/ovirt.cluster-upgrade/tasks/upgrade.yml
for localhost => (item={u'comment': u'', u'update_available': True,
u'protocol': u'stomp', u'affinity_labels': [], u'hooks': [],
u'max_scheduling_memory': 232492367872, u'cluster': {u'href':
u'/ovirt-engine/api/clusters/a45fe964-9989-11e8-b3f7-00163e4bf18a', u'id':
u'a45fe964-9989-11e8-b3f7-00163e4bf18a'}, u'href':
u'/ovirt-engine/api/hosts/fd0752d8-2d41-45b0-887a-0ffacbb8a237', u'spm':
{u'priority': 5, u'status': u'spm'}, u'id':
u'fd0752d8-2d41-45b0-887a-0ffacbb8a237', u'external_status': u'ok',
u'statistics': [], u'certificate': {u'organization': u'xxxxxxxx.com',
u'subject': u'O=xxxxxxxx.com,CN=host2.xxxxxxxx.com'}, u'nics': [],
u'iscsi': {u'initiator': u'iqn.1994-05.com.redhat:369c9881d24e'}, u'port':
54321, u'hardware_information': {u'serial_number': u'D9733W1',
u'product_name': u'PowerEdge R720', u'uuid':
u'4C4C4544-0039-3710-8033-C4C04F335731', u'supported_rng_sources':
[u'hwrng', u'random'], u'manufacturer': u'Dell Inc.'}, u'version':
{u'full_version': u'vdsm-4.20.43-1.el7', u'revision': 0, u'build': 43,
u'minor': 20, u'major': 4}, u'memory': 270258929664, u'ksm': {u'enabled':
False}, u'se_linux': {u'mode': u'permissive'}, u'type': u'ovirt_node',
u'storage_connection_extensions': [], u'status': u'up', u'tags': [],
u'katello_errata': [], u'external_network_provider_configurations': [],
u'ssh': {u'port': 22, u'fingerprint':
u'SHA256:cSgSzFFQaziGHDeKH1CIfXd8YbheGTaft3XBgyr4YgQ'}, u'address': u'
host2.xxxxxxxx.com', u'numa_nodes': [], u'device_passthrough': {u'enabled':
False}, u'unmanaged_networks': [], u'permissions': [], u'numa_supported':
True, u'libvirt_version': {u'full_version': u'libvirt-3.9.0-14.el7_5.8',
u'revision': 0, u'build': 0, u'minor': 9, u'major': 3},
u'power_management': {u'kdump_detection': True, u'enabled': True,
u'pm_proxies': [{u'type': u'cluster'}, {u'type': u'dc'}],
u'automatic_pm_enabled': True}, u'name': u'host2', u'devices': [],
u'summary': {u'active': 4, u'migrating': 0, u'total': 4},
u'auto_numa_status': u'enable', u'transparent_huge_pages': {u'enabled':
True}, u'network_attachments': [], u'os': {u'version': {u'full_version':
u'7 - 5.1804.5.el7.centos', u'major': 7}, u'type': u'RHEL',
u'custom_kernel_cmdline': u'', u'reported_kernel_cmdline':
u'BOOT_IMAGE=/ovirt-node-ng-4.2.7.1-0.20181114.0+1/vmlinuz-3.10.0-862.14.4.el7.x86_64
root=/dev/onn_host2/ovirt-node-ng-4.2.7.1-0.20181114.0+1 ro
crashkernel=auto rd.lvm.lv=onn_host2/ovirt-node-ng-4.2.7.1-0.20181114.0+1
rd.lvm.lv=onn_host2/swap rhgb quiet LANG=en_CA.UTF-8
img.bootid=ovirt-node-ng-4.2.7.1-0.20181114.0+1'}, u'cpu': {u'speed':
1200.0, u'name': u'Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz', u'topology':
{u'cores': 8, u'threads': 1, u'sockets': 2}}, u'kdump_status': u'disabled'})
TASK [oVirt.cluster-upgrade : Upgrade host]
**********************************************************************************************************************************************************************************************************************************************************************************
skipping: [localhost] => (item=vm1)
skipping: [localhost] => (item=vm2)
skipping: [localhost] => (item=vm3)
skipping: [localhost] => (item=vm4)
TASK [oVirt.cluster-upgrade : Create list of VM names which was shutted
down]
************************************************************************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [oVirt.cluster-upgrade : include_tasks]
*********************************************************************************************************************************************************************************************************************************************************************************
included: /usr/share/ansible/roles/ovirt.cluster-upgrade/tasks/upgrade.yml
for localhost => (item={u'comment': u'', u'update_available': True,
u'protocol': u'stomp', u'affinity_labels': [], u'hooks': [],
u'max_scheduling_memory': 235034116096, u'cluster': {u'href':
u'/ovirt-engine/api/clusters/a45fe964-9989-11e8-b3f7-00163e4bf18a', u'id':
u'a45fe964-9989-11e8-b3f7-00163e4bf18a'}, u'href':
u'/ovirt-engine/api/hosts/771c67eb-56e6-4736-8c67-668502d4ecf5', u'spm':
{u'priority': 5, u'status': u'none'}, u'id':
u'771c67eb-56e6-4736-8c67-668502d4ecf5', u'external_status': u'ok',
u'statistics': [], u'certificate': {u'organization': u'xxxxxxxx.com',
u'subject': u'O=xxxxxxxx.com,CN=host0.xxxxxxxx.com'}, u'nics': [],
u'iscsi': {u'initiator': u'iqn.1994-05.com.redhat:ac5566ae1e1a'}, u'port':
54321, u'hardware_information': {u'serial_number': u'70833W1',
u'product_name': u'PowerEdge R720', u'uuid':
u'4C4C4544-0030-3810-8033-B7C04F335731', u'supported_rng_sources':
[u'hwrng', u'random'], u'manufacturer': u'Dell Inc.'}, u'version':
{u'full_version': u'vdsm-4.20.43-1.el7', u'revision': 0, u'build': 43,
u'minor': 20, u'major': 4}, u'memory': 270258929664, u'ksm': {u'enabled':
False}, u'se_linux': {u'mode': u'permissive'}, u'type': u'ovirt_node',
u'storage_connection_extensions': [], u'status': u'up', u'tags': [],
u'katello_errata': [], u'external_network_provider_configurations': [],
u'ssh': {u'port': 22, u'fingerprint':
u'SHA256:TiiMiaHi++kqhmurXdlQjtwbb3Q1oIBi5ab2alKujyI'}, u'address': u'
host0.xxxxxxxx.com', u'numa_nodes': [], u'device_passthrough': {u'enabled':
False}, u'unmanaged_networks': [], u'permissions': [], u'numa_supported':
True, u'libvirt_version': {u'full_version': u'libvirt-3.9.0-14.el7_5.8',
u'revision': 0, u'build': 0, u'minor': 9, u'major': 3},
u'power_management': {u'kdump_detection': True, u'enabled': True,
u'pm_proxies': [{u'type': u'cluster'}, {u'type': u'dc'}],
u'automatic_pm_enabled': True}, u'name': u'host0', u'devices': [],
u'summary': {u'active': 4, u'migrating': 0, u'total': 4},
u'auto_numa_status': u'enable', u'transparent_huge_pages': {u'enabled':
True}, u'network_attachments': [], u'os': {u'version': {u'full_version':
u'7 - 5.1804.5.el7.centos', u'major': 7}, u'type': u'RHEL',
u'custom_kernel_cmdline': u'', u'reported_kernel_cmdline':
u'BOOT_IMAGE=/ovirt-node-ng-4.2.7.1-0.20181114.0+1/vmlinuz-3.10.0-862.14.4.el7.x86_64
root=/dev/onn_host0/ovirt-node-ng-4.2.7.1-0.20181114.0+1 ro
crashkernel=auto rd.lvm.lv=onn_host0/swap
rd.lvm.lv=onn_host0/ovirt-node-ng-4.2.7.1-0.20181114.0+1
rhgb quiet LANG=en_CA.UTF-8
img.bootid=ovirt-node-ng-4.2.7.1-0.20181114.0+1'}, u'cpu': {u'speed':
1200.0, u'name': u'Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz', u'topology':
{u'cores': 8, u'threads': 1, u'sockets': 2}}, u'kdump_status': u'disabled'})
included: /usr/share/ansible/roles/ovirt.cluster-upgrade/tasks/upgrade.yml
for localhost => (item={u'comment': u'', u'update_available': True,
u'protocol': u'stomp', u'affinity_labels': [], u'hooks': [],
u'max_scheduling_memory': 250759610368, u'cluster': {u'href':
u'/ovirt-engine/api/clusters/a45fe964-9989-11e8-b3f7-00163e4bf18a', u'id':
u'a45fe964-9989-11e8-b3f7-00163e4bf18a'}, u'href':
u'/ovirt-engine/api/hosts/fb1e62d5-1dc1-4ccc-8b2b-cf48f7077d0d', u'spm':
{u'priority': 5, u'status': u'none'}, u'id':
u'fb1e62d5-1dc1-4ccc-8b2b-cf48f7077d0d', u'external_status': u'ok',
u'statistics': [], u'certificate': {u'organization': u'xxxxxxxx.com',
u'subject': u'O=xxxxxxxx.com,CN=host1.xxxxxxxx.com'}, u'nics': [],
u'iscsi': {u'initiator': u'iqn.1994-05.com.redhat:2f5ea98fcec4'}, u'port':
54321, u'hardware_information': {u'serial_number': u'FYRTGX1',
u'product_name': u'PowerEdge R720', u'uuid':
u'4C4C4544-0059-5210-8054-C6C04F475831', u'supported_rng_sources':
[u'hwrng', u'random'], u'manufacturer': u'Dell Inc.'}, u'version':
{u'full_version': u'vdsm-4.20.43-1.el7', u'revision': 0, u'build': 43,
u'minor': 20, u'major': 4}, u'memory': 270258929664, u'ksm': {u'enabled':
False}, u'se_linux': {u'mode': u'permissive'}, u'type': u'ovirt_node',
u'storage_connection_extensions': [], u'status': u'up', u'tags': [],
u'katello_errata': [], u'external_network_provider_configurations': [],
u'ssh': {u'port': 22, u'fingerprint':
u'SHA256:HGU2WhIlPfxgXHXHOE8IYT4/NGAQeUBRe0QA34qQ+nc'}, u'address': u'
host1.xxxxxxxx.com', u'numa_nodes': [], u'device_passthrough': {u'enabled':
False}, u'unmanaged_networks': [], u'permissions': [], u'numa_supported':
True, u'libvirt_version': {u'full_version': u'libvirt-3.9.0-14.el7_5.8',
u'revision': 0, u'build': 0, u'minor': 9, u'major': 3},
u'power_management': {u'kdump_detection': True, u'enabled': True,
u'pm_proxies': [{u'type': u'cluster'}, {u'type': u'dc'}],
u'automatic_pm_enabled': True}, u'name': u'host1', u'devices': [],
u'summary': {u'active': 4, u'migrating': 0, u'total': 4},
u'auto_numa_status': u'enable', u'transparent_huge_pages': {u'enabled':
True}, u'network_attachments': [], u'os': {u'version': {u'full_version':
u'7 - 5.1804.5.el7.centos', u'major': 7}, u'type': u'RHEL',
u'custom_kernel_cmdline': u'', u'reported_kernel_cmdline':
u'BOOT_IMAGE=/ovirt-node-ng-4.2.7.1-0.20181114.0+1/vmlinuz-3.10.0-862.14.4.el7.x86_64
root=/dev/onn_host1/ovirt-node-ng-4.2.7.1-0.20181114.0+1 ro
crashkernel=auto rd.lvm.lv=onn_host1/ovirt-node-ng-4.2.7.1-0.20181114.0+1
rd.lvm.lv=onn_host1/swap rhgb quiet LANG=en_CA.UTF-8
img.bootid=ovirt-node-ng-4.2.7.1-0.20181114.0+1'}, u'cpu': {u'speed':
1200.0, u'name': u'Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz', u'topology':
{u'cores': 8, u'threads': 1, u'sockets': 2}}, u'kdump_status': u'disabled'})
included: /usr/share/ansible/roles/ovirt.cluster-upgrade/tasks/upgrade.yml
for localhost => (item={u'comment': u'', u'update_available': True,
u'protocol': u'stomp', u'affinity_labels': [], u'hooks': [],
u'max_scheduling_memory': 232492367872, u'cluster': {u'href':
u'/ovirt-engine/api/clusters/a45fe964-9989-11e8-b3f7-00163e4bf18a', u'id':
u'a45fe964-9989-11e8-b3f7-00163e4bf18a'}, u'href':
u'/ovirt-engine/api/hosts/fd0752d8-2d41-45b0-887a-0ffacbb8a237', u'spm':
{u'priority': 5, u'status': u'spm'}, u'id':
u'fd0752d8-2d41-45b0-887a-0ffacbb8a237', u'external_status': u'ok',
u'statistics': [], u'certificate': {u'organization': u'xxxxxxxx.com',
u'subject': u'O=xxxxxxxx.com,CN=host2.xxxxxxxx.com'}, u'nics': [],
u'iscsi': {u'initiator': u'iqn.1994-05.com.redhat:369c9881d24e'}, u'port':
54321, u'hardware_information': {u'serial_number': u'D9733W1',
u'product_name': u'PowerEdge R720', u'uuid':
u'4C4C4544-0039-3710-8033-C4C04F335731', u'supported_rng_sources':
[u'hwrng', u'random'], u'manufacturer': u'Dell Inc.'}, u'version':
{u'full_version': u'vdsm-4.20.43-1.el7', u'revision': 0, u'build': 43,
u'minor': 20, u'major': 4}, u'memory': 270258929664, u'ksm': {u'enabled':
False}, u'se_linux': {u'mode': u'permissive'}, u'type': u'ovirt_node',
u'storage_connection_extensions': [], u'status': u'up', u'tags': [],
u'katello_errata': [], u'external_network_provider_configurations': [],
u'ssh': {u'port': 22, u'fingerprint':
u'SHA256:cSgSzFFQaziGHDeKH1CIfXd8YbheGTaft3XBgyr4YgQ'}, u'address': u'
host2.xxxxxxxx.com', u'numa_nodes': [], u'device_passthrough': {u'enabled':
False}, u'unmanaged_networks': [], u'permissions': [], u'numa_supported':
True, u'libvirt_version': {u'full_version': u'libvirt-3.9.0-14.el7_5.8',
u'revision': 0, u'build': 0, u'minor': 9, u'major': 3},
u'power_management': {u'kdump_detection': True, u'enabled': True,
u'pm_proxies': [{u'type': u'cluster'}, {u'type': u'dc'}],
u'automatic_pm_enabled': True}, u'name': u'host2', u'devices': [],
u'summary': {u'active': 4, u'migrating': 0, u'total': 4},
u'auto_numa_status': u'enable', u'transparent_huge_pages': {u'enabled':
True}, u'network_attachments': [], u'os': {u'version': {u'full_version':
u'7 - 5.1804.5.el7.centos', u'major': 7}, u'type': u'RHEL',
u'custom_kernel_cmdline': u'', u'reported_kernel_cmdline':
u'BOOT_IMAGE=/ovirt-node-ng-4.2.7.1-0.20181114.0+1/vmlinuz-3.10.0-862.14.4.el7.x86_64
root=/dev/onn_host2/ovirt-node-ng-4.2.7.1-0.20181114.0+1 ro
crashkernel=auto rd.lvm.lv=onn_host2/ovirt-node-ng-4.2.7.1-0.20181114.0+1
rd.lvm.lv=onn_host2/swap rhgb quiet LANG=en_CA.UTF-8
img.bootid=ovirt-node-ng-4.2.7.1-0.20181114.0+1'}, u'cpu': {u'speed':
1200.0, u'name': u'Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz', u'topology':
{u'cores': 8, u'threads': 1, u'sockets': 2}}, u'kdump_status': u'disabled'})
TASK [oVirt.cluster-upgrade : Upgrade host]
**********************************************************************************************************************************************************************************************************************************************************************************
skipping: [localhost] => (item=vm1)
skipping: [localhost] => (item=vm2)
skipping: [localhost] => (item=vm3)
TASK [oVirt.cluster-upgrade : Create list of VM names which was shutted
down]
************************************************************************************************************************************************************************************************************************************************
skipping: [localhost]
TASK [oVirt.cluster-upgrade : include_tasks]
*********************************************************************************************************************************************************************************************************************************************************************************
included: /usr/share/ansible/roles/ovirt.cluster-upgrade/tasks/upgrade.yml
for localhost => (item={u'comment': u'', u'update_available': True,
u'protocol': u'stomp', u'affinity_labels': [], u'hooks': [],
u'max_scheduling_memory': 235034116096, u'cluster': {u'href':
u'/ovirt-engine/api/clusters/a45fe964-9989-11e8-b3f7-00163e4bf18a', u'id':
u'a45fe964-9989-11e8-b3f7-00163e4bf18a'}, u'href':
u'/ovirt-engine/api/hosts/771c67eb-56e6-4736-8c67-668502d4ecf5', u'spm':
{u'priority': 5, u'status': u'none'}, u'id':
u'771c67eb-56e6-4736-8c67-668502d4ecf5', u'external_status': u'ok',
u'statistics': [], u'certificate': {u'organization': u'xxxxxxxx.com',
u'subject': u'O=xxxxxxxx.com,CN=host0.xxxxxxxx.com'}, u'nics': [],
u'iscsi': {u'initiator': u'iqn.1994-05.com.redhat:ac5566ae1e1a'}, u'port':
54321, u'hardware_information': {u'serial_number': u'70833W1',
u'product_name': u'PowerEdge R720', u'uuid':
u'4C4C4544-0030-3810-8033-B7C04F335731', u'supported_rng_sources':
[u'hwrng', u'random'], u'manufacturer': u'Dell Inc.'}, u'version':
{u'full_version': u'vdsm-4.20.43-1.el7', u'revision': 0, u'build': 43,
u'minor': 20, u'major': 4}, u'memory': 270258929664, u'ksm': {u'enabled':
False}, u'se_linux': {u'mode': u'permissive'}, u'type': u'ovirt_node',
u'storage_connection_extensions': [], u'status': u'up', u'tags': [],
u'katello_errata': [], u'external_network_provider_configurations': [],
u'ssh': {u'port': 22, u'fingerprint':
u'SHA256:TiiMiaHi++kqhmurXdlQjtwbb3Q1oIBi5ab2alKujyI'}, u'address': u'
host0.xxxxxxxx.com', u'numa_nodes': [], u'device_passthrough': {u'enabled':
False}, u'unmanaged_networks': [], u'permissions': [], u'numa_supported':
True, u'libvirt_version': {u'full_version': u'libvirt-3.9.0-14.el7_5.8',
u'revision': 0, u'build': 0, u'minor': 9, u'major': 3},
u'power_management': {u'kdump_detection': True, u'enabled': True,
u'pm_proxies': [{u'type': u'cluster'}, {u'type': u'dc'}],
u'automatic_pm_enabled': True}, u'name': u'host0', u'devices': [],
u'summary': {u'active': 4, u'migrating': 0, u'total': 4},
u'auto_numa_status': u'enable', u'transparent_huge_pages': {u'enabled':
True}, u'network_attachments': [], u'os': {u'version': {u'full_version':
u'7 - 5.1804.5.el7.centos', u'major': 7}, u'type': u'RHEL',
u'custom_kernel_cmdline': u'', u'reported_kernel_cmdline':
u'BOOT_IMAGE=/ovirt-node-ng-4.2.7.1-0.20181114.0+1/vmlinuz-3.10.0-862.14.4.el7.x86_64
root=/dev/onn_host0/ovirt-node-ng-4.2.7.1-0.20181114.0+1 ro
crashkernel=auto rd.lvm.lv=onn_host0/swap
rd.lvm.lv=onn_host0/ovirt-node-ng-4.2.7.1-0.20181114.0+1
rhgb quiet LANG=en_CA.UTF-8
img.bootid=ovirt-node-ng-4.2.7.1-0.20181114.0+1'}, u'cpu': {u'speed':
1200.0, u'name': u'Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz', u'topology':
{u'cores': 8, u'threads': 1, u'sockets': 2}}, u'kdump_status': u'disabled'})
included: /usr/share/ansible/roles/ovirt.cluster-upgrade/tasks/upgrade.yml
for localhost => (item={u'comment': u'', u'update_available': True,
u'protocol': u'stomp', u'affinity_labels': [], u'hooks': [],
u'max_scheduling_memory': 250759610368, u'cluster': {u'href':
u'/ovirt-engine/api/clusters/a45fe964-9989-11e8-b3f7-00163e4bf18a', u'id':
u'a45fe964-9989-11e8-b3f7-00163e4bf18a'}, u'href':
u'/ovirt-engine/api/hosts/fb1e62d5-1dc1-4ccc-8b2b-cf48f7077d0d', u'spm':
{u'priority': 5, u'status': u'none'}, u'id':
u'fb1e62d5-1dc1-4ccc-8b2b-cf48f7077d0d', u'external_status': u'ok',
u'statistics': [], u'certificate': {u'organization': u'xxxxxxxx.com',
u'subject': u'O=xxxxxxxx.com,CN=host1.xxxxxxxx.com'}, u'nics': [],
u'iscsi': {u'initiator': u'iqn.1994-05.com.redhat:2f5ea98fcec4'}, u'port':
54321, u'hardware_information': {u'serial_number': u'FYRTGX1',
u'product_name': u'PowerEdge R720', u'uuid':
u'4C4C4544-0059-5210-8054-C6C04F475831', u'supported_rng_sources':
[u'hwrng', u'random'], u'manufacturer': u'Dell Inc.'}, u'version':
{u'full_version': u'vdsm-4.20.43-1.el7', u'revision': 0, u'build': 43,
u'minor': 20, u'major': 4}, u'memory': 270258929664, u'ksm': {u'enabled':
False}, u'se_linux': {u'mode': u'permissive'}, u'type': u'ovirt_node',
u'storage_connection_extensions': [], u'status': u'up', u'tags': [],
u'katello_errata': [], u'external_network_provider_configurations': [],
u'ssh': {u'port': 22, u'fingerprint':
u'SHA256:HGU2WhIlPfxgXHXHOE8IYT4/NGAQeUBRe0QA34qQ+nc'}, u'address': u'
host1.xxxxxxxx.com', u'numa_nodes': [], u'device_passthrough': {u'enabled':
False}, u'unmanaged_networks': [], u'permissions': [], u'numa_supported':
True, u'libvirt_version': {u'full_version': u'libvirt-3.9.0-14.el7_5.8',
u'revision': 0, u'build': 0, u'minor': 9, u'major': 3},
u'power_management': {u'kdump_detection': True, u'enabled': True,
u'pm_proxies': [{u'type': u'cluster'}, {u'type': u'dc'}],
u'automatic_pm_enabled': True}, u'name': u'host1', u'devices': [],
u'summary': {u'active': 4, u'migrating': 0, u'total': 4},
u'auto_numa_status': u'enable', u'transparent_huge_pages': {u'enabled':
True}, u'network_attachments': [], u'os': {u'version': {u'full_version':
u'7 - 5.1804.5.el7.centos', u'major': 7}, u'type': u'RHEL',
u'custom_kernel_cmdline': u'', u'reported_kernel_cmdline':
u'BOOT_IMAGE=/ovirt-node-ng-4.2.7.1-0.20181114.0+1/vmlinuz-3.10.0-862.14.4.el7.x86_64
root=/dev/onn_host1/ovirt-node-ng-4.2.7.1-0.20181114.0+1 ro
crashkernel=auto rd.lvm.lv=onn_host1/ovirt-node-ng-4.2.7.1-0.20181114.0+1
rd.lvm.lv=onn_host1/swap rhgb quiet LANG=en_CA.UTF-8
img.bootid=ovirt-node-ng-4.2.7.1-0.20181114.0+1'}, u'cpu': {u'speed':
1200.0, u'name': u'Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz', u'topology':
{u'cores': 8, u'threads': 1, u'sockets': 2}}, u'kdump_status': u'disabled'})
included: /usr/share/ansible/roles/ovirt.cluster-upgrade/tasks/upgrade.yml
for localhost => (item={u'comment': u'', u'update_available': True,
u'protocol': u'stomp', u'affinity_labels': [], u'hooks': [],
u'max_scheduling_memory': 232492367872, u'cluster': {u'href':
u'/ovirt-engine/api/clusters/a45fe964-9989-11e8-b3f7-00163e4bf18a', u'id':
u'a45fe964-9989-11e8-b3f7-00163e4bf18a'}, u'href':
u'/ovirt-engine/api/hosts/fd0752d8-2d41-45b0-887a-0ffacbb8a237', u'spm':
{u'priority': 5, u'status': u'spm'}, u'id':
u'fd0752d8-2d41-45b0-887a-0ffacbb8a237', u'external_status': u'ok',
u'statistics': [], u'certificate': {u'organization': u'xxxxxxxx.com',
u'subject': u'O=xxxxxxxx.com,CN=host2.xxxxxxxx.com'}, u'nics': [],
u'iscsi': {u'initiator': u'iqn.1994-05.com.redhat:369c9881d24e'}, u'port':
54321, u'hardware_information': {u'serial_number': u'D9733W1',
u'product_name': u'PowerEdge R720', u'uuid':
u'4C4C4544-0039-3710-8033-C4C04F335731', u'supported_rng_sources':
[u'hwrng', u'random'], u'manufacturer': u'Dell Inc.'}, u'version':
{u'full_version': u'vdsm-4.20.43-1.el7', u'revision': 0, u'build': 43,
u'minor': 20, u'major': 4}, u'memory': 270258929664, u'ksm': {u'enabled':
False}, u'se_linux': {u'mode': u'permissive'}, u'type': u'ovirt_node',
u'storage_connection_extensions': [], u'status': u'up', u'tags': [],
u'katello_errata': [], u'external_network_provider_configurations': [],
u'ssh': {u'port': 22, u'fingerprint':
u'SHA256:cSgSzFFQaziGHDeKH1CIfXd8YbheGTaft3XBgyr4YgQ'}, u'address': u'
host2.xxxxxxxx.com', u'numa_nodes': [], u'device_passthrough': {u'enabled':
False}, u'unmanaged_networks': [], u'permissions': [], u'numa_supported':
True, u'libvirt_version': {u'full_version': u'libvirt-3.9.0-14.el7_5.8',
u'revision': 0, u'build': 0, u'minor': 9, u'major': 3},
u'power_management': {u'kdump_detection': True, u'enabled': True,
u'pm_proxies': [{u'type': u'cluster'}, {u'type': u'dc'}],
u'automatic_pm_enabled': True}, u'name': u'host2', u'devices': [],
u'summary': {u'active': 4, u'migrating': 0, u'total': 4},
u'auto_numa_status': u'enable', u'transparent_huge_pages': {u'enabled':
True}, u'network_attachments': [], u'os': {u'version': {u'full_version':
u'7 - 5.1804.5.el7.centos', u'major': 7}, u'type': u'RHEL',
u'custom_kernel_cmdline': u'', u'reported_kernel_cmdline':
u'BOOT_IMAGE=/ovirt-node-ng-4.2.7.1-0.20181114.0+1/vmlinuz-3.10.0-862.14.4.el7.x86_64
root=/dev/onn_host2/ovirt-node-ng-4.2.7.1-0.20181114.0+1 ro
crashkernel=auto rd.lvm.lv=onn_host2/ovirt-node-ng-4.2.7.1-0.20181114.0+1
rd.lvm.lv=onn_host2/swap rhgb quiet LANG=en_CA.UTF-8
img.bootid=ovirt-node-ng-4.2.7.1-0.20181114.0+1'}, u'cpu': {u'speed':
1200.0, u'name': u'Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz', u'topology':
{u'cores': 8, u'threads': 1, u'sockets': 2}}, u'kdump_status': u'disabled'})
TASK [oVirt.cluster-upgrade : Upgrade host]
**********************************************************************************************************************************************************************************************************************************************************************************
changed: [localhost]
TASK [oVirt.cluster-upgrade : Upgrade host]
**********************************************************************************************************************************************************************************************************************************************************************************
changed: [localhost]
TASK [oVirt.cluster-upgrade : Upgrade host]
**********************************************************************************************************************************************************************************************************************************************************************************
changed: [localhost]
TASK [oVirt.cluster-upgrade : Set original cluster policy]
*******************************************************************************************************************************************************************************************************************************************************************
changed: [localhost]
TASK [oVirt.cluster-upgrade : Start again stopped VMs]
***********************************************************************************************************************************************************************************************************************************************************************
TASK [oVirt.cluster-upgrade : Start again pin to host VMs]
*******************************************************************************************************************************************************************************************************************************************************************
TASK [oVirt.cluster-upgrade : Logout from oVirt]
*****************************************************************************************************************************************************************************************************************************************************************************
ok: [localhost]
PLAY RECAP
*******************************************************************************************************************************************************************************************************************************************************************************************************************
localhost : ok=21 changed=5 unreachable=0 failed=0
6 years, 3 months
Unable to get the proper console of vm
by Shikhar Verma
Hi,
I have created the virtual machine from the oVirt manager, but when I try to open the console of the VM to do the installation, it only shows two lines. I have even tried Run Once, selected CD-ROM as the first boot option, and attached the CentOS 7 ISO.
SeaBIOS (version 1.11.0-2.el7)
Machine UUID -------
Also, from the manager, the newly launched VM is showing green.
And from the host machine, it is showing this error
Jan 21 19:23:24 servera libvirtd: 2019-01-21 13:53:24.286+0000: 12800: error : qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest agent is not connected
I am using the latest version of ovirt-engine and of the host as well.
Please respond.
Thanks
Shikhar
6 years, 3 months
Ovirt 4.2.8 allows to remove a gluster volume without detaching the storage domain
by Strahil Nikolov
Hello Community,
As I'm still experimenting with my oVirt lab, I have somehow managed to remove my gluster volume ('gluster volume list' confirms it) without detaching the storage domain.
This sounds like a bug to me, am I right?
Steps to reproduce:
1. Create a replica 3 arbiter 1 gluster volume (a sketch follows below)
2. Create a storage domain on it
3. Go to Volumes and select the name of the volume
4. Press remove and confirm. The task fails, but the volume is now gone in gluster.
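A rough sketch of step 1; the volume name, hostnames and brick paths are placeholders, not taken from the original report:
gluster volume create data replica 3 arbiter 1 \
  ovirt1:/gluster_bricks/data/data \
  ovirt2:/gluster_bricks/data/data \
  ovirt3:/gluster_bricks/data/data
gluster volume start data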
I guess I have to do some cleanup in the DB in order to fix that.
Best Regards,
Strahil Nikolov
6 years, 3 months
How to replace VMware infrastructure with oVirt
by Mannish Kumar
Hi,
I have two ESXi hosts managed by VMware vCenter Server. I want to create a similar infrastructure with oVirt. I know that oVirt is similar to VMware vCenter Server, but I am not sure what to replace the ESXi hosts with in an oVirt environment.
I am looking to build oVirt with a Self-Hosted Engine. It would be a great help if someone could help me build this.
6 years, 3 months
Re: [Gluster-users] Re: Gluster performance issues - need advice
by Strahil Nikolov
Dear Darrell,
I found the issue and now I can reach the maximum of the network with a fuse client. Here is a short overview:
1. I noticed that working with a new gluster volume reaches my network speed - I was quite excited.
2. Then I destroyed my gluster volume, created a new one and started adding features from oVirt.
Once I added features.shard on -> I hit the same performance as before. Increasing the shard size to 16MB didn't help at all.
For my case, where I have 2 virtualization hosts with a single data gluster volume, sharding is not necessary, but for larger setups it will be a problem.
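For reference, the shard toggle and block size mentioned above are ordinary volume options; a sketch, assuming the volume is named 'data':
gluster volume set data features.shard on
gluster volume set data features.shard-block-size 16MB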
As this looks like a bug to me - can someone tell me where I can report it?
Thanks to all who guided me on this journey with GlusterFS! I have learned so much, as my prior knowledge was only in Ceph.
Best Regards,
Strahil Nikolov
On Thursday, January 24, 2019 at 17:53:50 GMT+2, Darrell Budic <budic(a)onholyground.com> wrote:
Strahil-
The fuse client is what it is, it’s limited by operating in user land and waiting for the gluster servers to acknowledge all the writes. I noted you're using oVirt; you should look into enabling the libgfapi engine setting to run your VMs with libgfapi natively. You can’t test directly from the host with that, but you can run your tests inside the VMs. I saw significant throughput and latency improvements that way. It’s still somewhat beta, so you’ll probably need to search the ovirt-users mailing list to find info on enabling it.
Good luck!
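If it helps, enabling libgfapi as described above is done on the engine side with engine-config; a sketch, assuming the LibgfApiSupported key and a 4.2 cluster level:
engine-config -s LibgfApiSupported=true --cver=4.2
systemctl restart ovirt-engine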
On Jan 24, 2019, at 4:32 AM, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
Dear Amar, Community,
it seems the issue is in the fuse client itself.
Here is the latest update:
1. I have added the following (see the sketch after the dd output below for how they are applied):
server.event-threads: 4
client.event-threads: 4
performance.stat-prefetch: on
performance.strict-o-direct: off
Results: no change
2. Allowed nfs and connected ovirt1 to the gluster volume:
nfs.disable: off
Results: Drastic improvement in performance as follows:
[root@ovirt1 data]# dd if=/dev/zero of=largeio bs=1M count=5000 status=progress
5000+0 records in
5000+0 records out
5242880000 bytes (5.2 GB) copied, 53.0443 s, 98.8 MB/s
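The options from updates 1 and 2 above are set per volume; roughly, assuming the volume is named 'data':
gluster volume set data server.event-threads 4
gluster volume set data client.event-threads 4
gluster volume set data performance.stat-prefetch on
gluster volume set data performance.strict-o-direct off
gluster volume set data nfs.disable off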
So I would be happy if anyone could guide me in fixing the situation, as the fuse client is the best way to use GlusterFS, and it seems the glusterfs server is not the guilty one.
Thanks in advance for your guidance. I have learned so much.
Best Regards,
Strahil Nikolov
From: Strahil <hunter86_bg(a)yahoo.com>
To: Amar Tumballi Suryanarayan <atumball(a)redhat.com>
Cc: Gluster-users <gluster-users(a)gluster.org>
Sent: Wednesday, January 23, 2019 18:44
Subject: Re: [Gluster-users] Gluster performance issues - need advice
Dear Amar,
Thanks for your email.
Actually my concerns were on both topics. Would you recommend any perf options that would be suitable?
After mentioning the network usage, I just checked it and it seems that during the test session, ovirt1 (both client and host) is using no more than 455Mbit/s, which is half the network bandwidth.
I'm still in the middle of nowhere, so any ideas are welcome.
Best Regards,
Strahil Nikolov
On Jan 23, 2019 17:49, Amar Tumballi Suryanarayan <atumball(a)redhat.com> wrote:
I didn't understand the issue properly. Mostly I missed something.
Are you concerned the performance is 49MB/s with and without perf options? or are you expecting it to be 123MB/s as over the n/w you get that speed?
If it is the first problem, then you are actually having 'performance.write-behind on' in both options, and it is the only perf xlator which comes into action during the test you ran.
If it is the second, then please be informed that gluster does client side replication, which means, n/w would be split in half for write operations (like write(), creat() etc), so the number you are getting is almost the maximum with 1GbE.
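A rough back-of-the-envelope for that halving, with assumed numbers for a single 1GbE link:
# ~117 MB/s usable wire speed; the fuse client writes every block to both data bricks
# (the arbiter receives only metadata), so write throughput is roughly halved:
echo "scale=1; 117 / 2" | bc   # ~58.5 MB/s, close to the ~60 MB/s observed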
Regards,
Amar
On Wed, Jan 23, 2019 at 8:38 PM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
Hello Community,
recently I have built a new lab based on oVirt and CentOS 7.
During deployment I had some hiccups, but now the engine is up and running - but gluster is causing me trouble.
Symptoms: Slow VM install from DVD, poor write performance. The latter has been tested via:
dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_data bs=1M count=1000 status=progress
The reported speed is 60MB/s which is way too low for my setup.
My lab design:
https://drive.google.com/file/d/1SiW21ASPXHRAEuE_jZ50R3FoO-NcnFqT/view?us...
Gluster version is 3.12.15
So far I have done:
1. Added 'server.allow-insecure on' (with 'option rpc-auth-allow-insecure on' in glusterd.vol)
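For completeness, that change is typically applied like this (volume name assumed, and glusterd needs a restart on every node after editing glusterd.vol):
gluster volume set data server.allow-insecure on
# in /etc/glusterfs/glusterd.vol, inside the volume management block:
#   option rpc-auth-allow-insecure on
systemctl restart glusterd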
Volume info after that change:
Volume Name: data
Type: Replicate
Volume ID: 9b06a1e9-8102-4cd7-bc56-84960a1efaa2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1.localdomain:/gluster_bricks/data/data
Brick2: ovirt2.localdomain:/gluster_bricks/data/data
Brick3: ovirt3.localdomain:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
server.allow-insecure: on
Seems no positive or negative effect so far.
2. Tested with tmpfs on all bricks -> ovirt1 mounted gluster volume -> max 60MB/s (bs=1M without 'oflag=direct')
[root@ovirt1 data]# dd if=/dev/zero of=large_io bs=1M count=4000 status=progress
4177526784 bytes (4.2 GB) copied, 70.843409 s, 59.0 MB/s
4000+0 records in
4000+0 records out
4194304000 bytes (4.2 GB) copied, 71.1407 s, 59.0 MB/s
[root@ovirt1 data]# rm -f large_io
[root@ovirt1 data]# gluster volume profile data info
Brick: ovirt1.localdomain:/gluster_bricks/data/data
---------------------------------------------------
Cumulative Stats:
Block Size: 131072b+
No. of Reads: 8
No. of Writes: 44968
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 3 FORGET
0.00 0.00 us 0.00 us 0.00 us 35 RELEASE
0.00 0.00 us 0.00 us 0.00 us 28 RELEASEDIR
0.00 78.00 us 78.00 us 78.00 us 1 FSTAT
0.00 35.67 us 26.00 us 73.00 us 6 FLUSH
0.00 324.00 us 324.00 us 324.00 us 1 XATTROP
0.00 45.80 us 38.00 us 54.00 us 10 STAT
0.00 227.67 us 216.00 us 242.00 us 3 CREATE
0.00 113.38 us 68.00 us 381.00 us 8 READ
0.00 39.82 us 1.00 us 148.00 us 28 OPENDIR
0.00 67.54 us 10.00 us 283.00 us 24 GETXATTR
0.00 59.97 us 45.00 us 113.00 us 32 OPEN
0.00 24.41 us 13.00 us 89.00 us 161 INODELK
0.00 43.43 us 28.00 us 214.00 us 93 STATFS
0.00 246.35 us 11.00 us 1155.00 us 20 READDIR
0.00 283.00 us 233.00 us 353.00 us 18 READDIRP
0.00 153.23 us 122.00 us 259.00 us 87 MKNOD
0.01 99.77 us 10.00 us 258.00 us 442 LOOKUP
0.31 49.22 us 27.00 us 540.00 us 45620 FXATTROP
0.77 124.24 us 87.00 us 604.00 us 44968 WRITE
0.93 15767.71 us 15.00 us 305833.00 us 431 ENTRYLK
1.99 160711.39 us 3332.00 us 406037.00 us 90 UNLINK
96.00 5167.82 us 18.00 us 55972.00 us 135349 FINODELK
Duration: 380 seconds
Data Read: 1048576 bytes
Data Written: 5894045696 bytes
Interval 0 Stats:
Block Size: 131072b+
No. of Reads: 8
No. of Writes: 44968
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 3 FORGET
0.00 0.00 us 0.00 us 0.00 us 35 RELEASE
0.00 0.00 us 0.00 us 0.00 us 28 RELEASEDIR
0.00 78.00 us 78.00 us 78.00 us 1 FSTAT
0.00 35.67 us 26.00 us 73.00 us 6 FLUSH
0.00 324.00 us 324.00 us 324.00 us 1 XATTROP
0.00 45.80 us 38.00 us 54.00 us 10 STAT
0.00 227.67 us 216.00 us 242.00 us 3 CREATE
0.00 113.38 us 68.00 us 381.00 us 8 READ
0.00 39.82 us 1.00 us 148.00 us 28 OPENDIR
0.00 67.54 us 10.00 us 283.00 us 24 GETXATTR
0.00 59.97 us 45.00 us 113.00 us 32 OPEN
0.00 24.41 us 13.00 us 89.00 us 161 INODELK
0.00 43.43 us 28.00 us 214.00 us 93 STATFS
0.00 246.35 us 11.00 us 1155.00 us 20 READDIR
0.00 283.00 us 233.00 us 353.00 us 18 READDIRP
0.00 153.23 us 122.00 us 259.00 us 87 MKNOD
0.01 99.77 us 10.00 us 258.00 us 442 LOOKUP
0.31 49.22 us 27.00 us 540.00 us 45620 FXATTROP
0.77 124.24 us 87.00 us 604.00 us 44968 WRITE
0.93 15767.71 us 15.00 us 305833.00 us 431 ENTRYLK
1.99 160711.39 us 3332.00 us 406037.00 us 90 UNLINK
96.00 5167.82 us 18.00 us 55972.00 us 135349 FINODELK
Duration: 380 seconds
Data Read: 1048576 bytes
Data Written: 5894045696 bytes
Brick: ovirt3.localdomain:/gluster_bricks/data/data
---------------------------------------------------
Cumulative Stats:
Block Size: 1b+
No. of Reads: 0
No. of Writes: 39328
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 2 FORGET
0.00 0.00 us 0.00 us 0.00 us 12 RELEASE
0.00 0.00 us 0.00 us 0.00 us 17 RELEASEDIR
0.00 101.00 us 101.00 us 101.00 us 1 FSTAT
0.00 51.50 us 20.00 us 81.00 us 4 FLUSH
0.01 219.50 us 188.00 us 251.00 us 2 CREATE
0.01 43.45 us 11.00 us 90.00 us 11 GETXATTR
0.01 62.30 us 38.00 us 119.00 us 10 OPEN
0.01 50.59 us 1.00 us 102.00 us 17 OPENDIR
0.01 24.60 us 12.00 us 64.00 us 40 INODELK
0.02 176.30 us 10.00 us 765.00 us 10 READDIR
0.07 63.08 us 39.00 us 133.00 us 78 UNLINK
0.13 27.35 us 10.00 us 91.00 us 333 ENTRYLK
0.13 126.89 us 99.00 us 179.00 us 76 MKNOD
0.42 116.70 us 8.00 us 8661.00 us 261 LOOKUP
28.73 51.79 us 22.00 us 2574.00 us 39822 FXATTROP
29.52 53.87 us 16.00 us 3290.00 us 39328 WRITE
40.92 24.71 us 10.00 us 3224.00 us 118864 FINODELK
Duration: 189 seconds
Data Read: 0 bytes
Data Written: 39328 bytes
Interval 0 Stats:
Block Size: 1b+
No. of Reads: 0
No. of Writes: 39328
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 2 FORGET
0.00 0.00 us 0.00 us 0.00 us 12 RELEASE
0.00 0.00 us 0.00 us 0.00 us 17 RELEASEDIR
0.00 101.00 us 101.00 us 101.00 us 1 FSTAT
0.00 51.50 us 20.00 us 81.00 us 4 FLUSH
0.01 219.50 us 188.00 us 251.00 us 2 CREATE
0.01 43.45 us 11.00 us 90.00 us 11 GETXATTR
0.01 62.30 us 38.00 us 119.00 us 10 OPEN
0.01 50.59 us 1.00 us 102.00 us 17 OPENDIR
0.01 24.60 us 12.00 us 64.00 us 40 INODELK
0.02 176.30 us 10.00 us 765.00 us 10 READDIR
0.07 63.08 us 39.00 us 133.00 us 78 UNLINK
0.13 27.35 us 10.00 us 91.00 us 333 ENTRYLK
0.13 126.89 us 99.00 us 179.00 us 76 MKNOD
0.42 116.70 us 8.00 us 8661.00 us 261 LOOKUP
28.73 51.79 us 22.00 us 2574.00 us 39822 FXATTROP
29.52 53.87 us 16.00 us 3290.00 us 39328 WRITE
40.92 24.71 us 10.00 us 3224.00 us 118864 FINODELK
Duration: 189 seconds
Data Read: 0 bytes
Data Written: 39328 bytes
Brick: ovirt2.localdomain:/gluster_bricks/data/data
---------------------------------------------------
Cumulative Stats:
Block Size: 512b+ 131072b+
No. of Reads: 0 0
No. of Writes: 36 76758
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 6 FORGET
0.00 0.00 us 0.00 us 0.00 us 87 RELEASE
0.00 0.00 us 0.00 us 0.00 us 96 RELEASEDIR
0.00 100.50 us 80.00 us 121.00 us 2 REMOVEXATTR
0.00 101.00 us 101.00 us 101.00 us 2 SETXATTR
0.00 36.18 us 22.00 us 62.00 us 11 FLUSH
0.00 57.44 us 42.00 us 77.00 us 9 FTRUNCATE
0.00 82.56 us 59.00 us 138.00 us 9 FSTAT
0.00 89.42 us 67.00 us 161.00 us 12 SETATTR
0.00 272.40 us 235.00 us 296.00 us 5 CREATE
0.01 154.28 us 88.00 us 320.00 us 18 XATTROP
0.01 45.29 us 1.00 us 319.00 us 96 OPENDIR
0.01 86.69 us 30.00 us 379.00 us 62 STAT
0.01 64.30 us 47.00 us 169.00 us 84 OPEN
0.02 107.34 us 23.00 us 273.00 us 73 READDIRP
0.02 4688.00 us 86.00 us 9290.00 us 2 TRUNCATE
0.02 59.29 us 13.00 us 394.00 us 165 GETXATTR
0.03 128.51 us 27.00 us 338.00 us 96 FSYNC
0.03 240.75 us 14.00 us 1943.00 us 52 READDIR
0.04 65.59 us 26.00 us 293.00 us 279 STATFS
0.06 180.77 us 118.00 us 306.00 us 148 MKNOD
0.14 37.98 us 17.00 us 192.00 us 1598 INODELK
0.67 91.68 us 12.00 us 1141.00 us 3186 LOOKUP
10.10 55.92 us 28.00 us 1658.00 us 78608 FXATTROP
11.89 6814.76 us 18.00 us 301246.00 us 760 ENTRYLK
19.44 36.55 us 14.00 us 2353.00 us 231535 FINODELK
25.21 142.92 us 62.00 us 593.00 us 76794 WRITE
32.28 91283.68 us 28.00 us 316658.00 us 154 UNLINK
Duration: 1206 seconds
Data Read: 0 bytes
Data Written: 10060843008 bytes
Interval 0 Stats:
Block Size: 512b+ 131072b+
No. of Reads: 0 0
No. of Writes: 36 76758
%-latency Avg-latency Min-Latency Max-Latency No. of calls Fop
--------- ----------- ----------- ----------- ------------ ----
0.00 0.00 us 0.00 us 0.00 us 6 FORGET
0.00 0.00 us 0.00 us 0.00 us 87 RELEASE
0.00 0.00 us 0.00 us 0.00 us 96 RELEASEDIR
0.00 100.50 us 80.00 us 121.00 us 2 REMOVEXATTR
0.00 101.00 us 101.00 us 101.00 us 2 SETXATTR
0.00 36.18 us 22.00 us 62.00 us 11 FLUSH
0.00 57.44 us 42.00 us 77.00 us 9 FTRUNCATE
0.00 82.56 us 59.00 us 138.00 us 9 FSTAT
0.00 89.42 us 67.00 us 161.00 us 12 SETATTR
0.00 272.40 us 235.00 us 296.00 us 5 CREATE
0.01 154.28 us 88.00 us 320.00 us 18 XATTROP
0.01 45.29 us 1.00 us 319.00 us 96 OPENDIR
0.01 86.69 us 30.00 us 379.00 us 62 STAT
0.01 64.30 us 47.00 us 169.00 us 84 OPEN
0.02 107.34 us 23.00 us 273.00 us 73 READDIRP
0.02 4688.00 us 86.00 us 9290.00 us 2 TRUNCATE
0.02 59.29 us 13.00 us 394.00 us 165 GETXATTR
0.03 128.51 us 27.00 us 338.00 us 96 FSYNC
0.03 240.75 us 14.00 us 1943.00 us 52 READDIR
0.04 65.59 us 26.00 us 293.00 us 279 STATFS
0.06 180.77 us 118.00 us 306.00 us 148 MKNOD
0.14 37.98 us 17.00 us 192.00 us 1598 INODELK
0.67 91.66 us 12.00 us 1141.00 us 3186 LOOKUP
10.10 55.92 us 28.00 us 1658.00 us 78608 FXATTROP
11.89 6814.76 us 18.00 us 301246.00 us 760 ENTRYLK
19.44 36.55 us 14.00 us 2353.00 us 231535 FINODELK
25.21 142.92 us 62.00 us 593.00 us 76794 WRITE
32.28 91283.68 us 28.00 us 316658.00 us 154 UNLINK
Duration: 1206 seconds
Data Read: 0 bytes
Data Written: 10060843008 bytes
This indicates to me that it's not a problem in Disk/LVM/FileSystem layout.
Most probably I haven't created the volume properly or some option/feature is disabled ?!?
Network shows OK for a gigabit:
[root@ovirt1 data]# dd if=/dev/zero status=progress | nc ovirt2 9999
3569227264 bytes (3.6 GB) copied, 29.001052 s, 123 MB/s^C
7180980+0 records in
7180979+0 records out
3676661248 bytes (3.7 GB) copied, 29.8739 s, 123 MB/s
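The receiving end of that probe is not shown; it would be something along these lines on ovirt2 (ncat syntax assumed), started before the dd on ovirt1:
nc -l 9999 > /dev/null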
I'm looking for any help... you can share your volume info also.
Thanks in advance.
Best Regards,
Strahil Nikolov
_______________________________________________
Gluster-users mailing list
Gluster-users(a)gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Amar Tumballi (amarts)
6 years, 3 months
[ANN] oVirt 4.3.0 Third Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the Third
Release Candidate of oVirt 4.3.0, as of January 24th, 2019
This is pre-release software. This pre-release should not be used in
production.
Please take a look at our community page[1] to learn how to ask questions
and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This update is the third release candidate of the 4.3.0 version.
This release brings more than 130 enhancements and more than 450 bug fixes
on top of oVirt 4.2 series.
What's new in oVirt 4.3.0?
* Q35 chipset, support booting using UEFI and Secure Boot
* Skylake-server and AMD EPYC support
* New smbus driver in windows guest tools
* Improved support for v2v
* OVA export / import of Templates
* Full support for live migration of High Performance VMs
* Microsoft Failover clustering support (SCSI Persistent Reservation) for
Direct LUN disks
* Hundreds of bug fixes on top of oVirt 4.2 series
* New VM portal details page (see a preview here:
https://imgur.com/a/ExINpci)
* New Cluster upgrade UI
* OVN security groups
* IPv6 (static host addresses)
* Support of Neutron from RDO OpenStack 13 as external network provider
* Support of using Skydive from RDO OpenStack 14 as Tech Preview
* Support for 3.6 and 4.0 data centers, clusters and hosts were removed
* Now using PostgreSQL 10
* New metrics support using rsyslog instead of fluentd
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available for both CentOS 7 and Fedora 28
(tech preview).
- oVirt Node NG is already available for CentOS 7
- oVirt Node NG for Fedora 28 (tech preview) is being delayed due to build
issues with the build system.
Additional Resources:
* Read more about the oVirt 4.3.0 release highlights:
http://www.ovirt.org/release/4.3.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.3.0/
[4] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
6 years, 3 months
Re: Unable to detach/remove ISO DOMAIN
by Strahil Nikolov
Hi Martin,
this is my history (please keep in mind that it might get distorted due to the mail client). Note: I didn't stop the ovirt-engine.service and this caused some errors to be logged - but the engine is still working without issues. As I said - this is my test lab and I was willing to play around :)
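A safer variant - not what was done here - would be to stop the engine around the DB edits and start it again afterwards:
systemctl stop ovirt-engine
# ...run the psql cleanup below...
systemctl start ovirt-engine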
Good Luck!
ssh root@engine
#Switch to the postgres user
su - postgres
#If you don't load this, there will be no path for psql, nor will it start at all
source /opt/rh/rh-postgresql95/enable
#Open the DB
psql engine
#Commands in the DB:
select id, storage_name from storage_domain_static;
select storage_domain_id, ovf_disk_id from storage_domains_ovf_info where storage_domain_id='fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_domain_dynamic where id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_domain_static where id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from base_disks where disk_id = '7a155ede-5317-4860-aa93-de1dc283213e';
delete from base_disks where disk_id = '7dedd0e1-8ce8-444e-8a3d-117c46845bb0';
delete from storage_domains_ovf_info where storage_domain_id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
delete from storage_pool_iso_map where storage_id = 'fbe7bf1a-2f03-4311-89fa-5031eab638bf';
#I think this shows all tables:
select table_schema, table_name from information_schema.tables order by table_schema, table_name;
#Maybe you don't need this one and you need to find the NFS volume:
select * from gluster_volumes;
delete from gluster_volumes where id = '9b06a1e9-8102-4cd7-bc56-84960a1efaa2';
select table_schema, table_name from information_schema.tables order by table_schema, table_name;
# The previous delete failed as there was an entry in storage_server_connections.
# In your case it could be different
select * from storage_server_connections;
delete from storage_server_connections where id = '490ee1c7-ae29-45c0-bddd-6170822c8490';
delete from gluster_volumes where id = '9b06a1e9-8102-4cd7-bc56-84960a1efaa2';
Best Regards,
Strahil Nikolov
On Friday, January 25, 2019 at 11:04:01 GMT+2, Martin Humaj <mhumaj(a)gmail.com> wrote:
Hi Strahil,
I have tried to use the same IP and NFS export to replace the original; it did not work properly.
If you can guide me on how to do it in the engine DB, I would appreciate it. This is a test system.
Thank you, Martin
On Fri, Jan 25, 2019 at 9:56 AM Strahil <hunter86_bg(a)yahoo.com> wrote:
Can you create a temporary NFS server which can be accessed during the removal? I have managed to edit the engine's DB to get rid of cluster domain, but this is not recommended for production systems :)
6 years, 3 months
Unable to detach/remove ISO DOMAIN
by mhumaj@gmail.com
Ovirt - 4.2.4.5-1.el7
Is there any way to remove the NFS ISO domain in the DB? We cannot get rid of it in the GUI and we are not able to use it anymore. The problem is that the NFS server which was responsible for the DATA TYPE ISO domain was deleted. Even when we try to change it in the settings, it will not allow us to do it.
Error messages:
Failed to activate Storage Domain oVirt-ISO (Data Center InnovationCenter) by admin@internal-authz
VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: (u'61045461-10ff-4f7a-b464-67198c4a6c27',)
thank you
6 years, 3 months
Re: ovirt 4.2 HCI rollout
by Simone Tiraboschi
On Thu, Jan 24, 2019 at 3:20 PM Markus Schaufler <
markus.schaufler(a)digit-all.at> wrote:
> no...
>
> all logs in that folder are attached in the mail before.
>
OK, unfortunately in this case I can just suggest to retry and, when it
reaches
[ INFO ] TASK [Check engine VM health]
try to connect to the engine VM via ssh and check what's happening there to
ovirt-engine
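A rough sketch of that check (the engine FQDN below is the one used at deploy time in this thread):
ssh root@ovirt-hci.res01.ads.ooe.local
systemctl status ovirt-engine
tail -n 200 /var/log/ovirt-engine/engine.log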
> ------------------------------
> *From:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Sent:* Thursday, January 24, 2019 15:16:52
> *To:* Markus Schaufler
> *Cc:* Dominik Holler; users(a)ovirt.org
> *Subject:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 3:14 PM Markus Schaufler <
> markus.schaufler(a)digit-all.at> wrote:
>
> The hosted engine is not running and cannot be started.
>
>
>
> Do you have on your first host a directory
> like /var/log/ovirt-hosted-engine-setup/engine-logs-2019-01-21T22:47:03Z
> with logs from the engine VM?
>
>
>
> ------------------------------
> *From:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Sent:* Thursday, January 24, 2019 14:45:59
> *To:* Markus Schaufler
> *Cc:* Dominik Holler; users(a)ovirt.org
> *Subject:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler <
> markus.schaufler(a)digit-all.at> wrote:
>
> Hi,
>
>
> thanks for the replies.
>
>
> I updated to 4.2.8 and tried again:
>
>
> [ INFO ] TASK [Check engine VM health]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.165316", "end": "2019-01-24 14:12:06.899564", "rc": 0, "start":
> "2019-01-24 14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true,
> \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}"]}
>
>
>
> It's still the same issue: the host fails to properly check the status of
> the engine over a dedicated health page.
>
> You should connect to ovirt-hci.res01.ads.ooe.local and check the status
> of ovirt-engine service and /var/log/ovirt-engine/engine.log there.
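The liveliness check polls the engine's health servlet; one way to probe it by hand (URL pattern assumed) would be:
curl -k https://ovirt-hci.res01.ads.ooe.local/ovirt-engine/services/health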
>
>
>
> [ INFO ] TASK [Check VM status at virt level]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Fail if engine VM is not running]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get VDSM's target engine VM stats]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Convert stats to JSON format]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Fail if the Engine has no IP address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved
> IP]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line
> 518, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
> - name: Reconfigure OVN central address\n ^ here\n"}
>
>
>
> attached you'll find the setup logs.
>
>
> best regards,
>
> Markus Schaufler
> ------------------------------
> *From:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Sent:* Thursday, January 24, 2019 11:56:50
> *To:* Dominik Holler
> *Cc:* Markus Schaufler; users(a)ovirt.org
> *Subject:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Tue, 22 Jan 2019 11:15:12 +0000
> Markus Schaufler <markus.schaufler(a)digit-all.at> wrote:
>
> > Thanks for your reply,
> >
> > getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> > 10.1.31.20
> >
> > attached you'll find the logs.
> >
>
> Thanks, to my eyes this looks like a bug.
> I tried to isolate the relevant lines in the attached playbook.
>
> Markus, would you be so kind to check if ovirt-4.2.8 is working for you?
>
>
>
> OK, understood: the real error was just a few lines before what Dominik
> pointed out:
>
> "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
> true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}",
> "stdout_lines": [
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
> \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}"
> ]
> }"
> 2019-01-21 13:57:46,695+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Check engine VM health',
> 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True,
> \'stderr_lines\': [], u\'changed\': True, u\'end\': u\'2019-01-21
> 13:57:46.242423\', \'_ansible_no_log\': False, u\'stdout\': u\'{"1":
> {"conf_on_shared_storage": true, "live-data": true, "extra":
> "metadata_parse_version=1\\\\nmetadata_feature_version=1\\\\ntimestamp=5792
> (Mon Jan 21 13:57:4', 'ansible_host': u'localhost', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'}
>
> and in particular it's here:
> for some reason we got \"engine-status\": {\"reason\": \"failed
> liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\":
> \"Up\"}
> over 120 attempts: we have to check engine.log (it got collected as well
> from the engine VM) to understand why the engine was failing to start.
>
>
>
>
> > ________________________________
> > From: Dominik Holler <dholler(a)redhat.com>
> > Sent: Monday, January 21, 2019 17:52:35
> > To: Markus Schaufler
> > Cc: users(a)ovirt.org; Simone Tiraboschi
> > Subject: Re: [ovirt-users] ovirt 4.2 HCI rollout
> >
> > Would you please share the related ovirt-host-deploy-ansible-*.log
> > stored on the host in /var/log/ovirt-hosted-engine-setup ?
> >
> > Would you please also share the output of
> > getent ahosts YOUR_HOSED_ENGNE_FQDN | cut -d' ' -f1 | uniq
> > if executed on this host?
> >
> >
> > On Mon, 21 Jan 2019 13:37:53 -0000
> > "Markus Schaufler" <markus.schaufler(a)digit-all.at> wrote:
> >
> > > Hi,
> > >
> > > I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 centos VM's by
> > > following
> > >
> https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyp...
> > > gluster deployment was successful but at HE deployment "stage 5" I
> > > got following error:
> > >
> > > [ INFO ] TASK [Reconfigure OVN central address]
> > > [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> > > an option with an undefined variable. The error was: 'dict object'
> > > has no attribute 'stdout_lines'\n\nThe error appears to have been
> > > in
> > > '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> > > line 522, column 5, but may\nbe elsewhere in the file depending on
> > > the exact syntax problem.\n\nThe offending line appears to be:\n\n
> > > #
> > >
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
> > > - name: Reconfigure OVN central address\n ^ here\n"}
> > >
> > >
> > > /var/log/messages:
> > > Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> > > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> > > Engine VM stopped on localhost Jan 21 14:10:01 HCI01 systemd:
> > > Started Session 22 of user root. Jan 21 14:10:02 HCI01 systemd:
> > > Started Session c306 of user root. Jan 21 14:10:03 HCI01 systemd:
> > > Started Session c307 of user root. Jan 21 14:10:06 HCI01
> > > vdsm[3650]: WARN executor state: count=5 workers=set([<Worker
> > > name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker
> > > name=periodic/1 running <Task discardable <Operation
> > > action=<vdsm.virt.sampling.VMBulkstatsMonitor object at
> > > 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at
> > > 0x7fd33c1e0ed0> discarded task#=413 at 0x7fd2d5ed0510>, <Worker
> > > name=periodic/3 waiting task#=414 at 0x7fd2d5ed0b10>, <Worker
> > > name=periodic/5 waiting task#=0 at 0x7fd2d425f650>, <Worker
> > > name=periodic/2 waiting task#=412 at 0x7fd2d5ed07d0>])
> > > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered
> > > disabled state Jan 21 14:10:06 HCI01 kernel: device vnet0 left
> > > promiscuous mode Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port
> > > 2(vnet0) entered disabled state
> > > Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info>
> > > [1548076206.9177] device (vnet0): state change: disconnected ->
> > > unmanaged (reason 'unmanaged', sys-iface-state: 'remo ved') Jan 21
> > > 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180]
> > > device (vnet0): released from master device ovirtmgmt Jan 21
> > > 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space
> > > available Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21
> > > 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to
> > > read from monitor: Connection reset by peer Jan 21 14:10:07 HCI01
> > > kvm: 0 guests now active Jan 21 14:10:07 HCI01 systemd-machined:
> > > Machine qemu-3-HostedEngine terminated. Jan 21 14:10:07 HCI01
> > > libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning :
> > > qemuGetProcessInfo:1406 : cannot parse process status data Jan 21
> > > 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704:
> > > warning : qemuGetProcessInfo:1406 : cannot parse process status
> > > data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000:
> > > 2704: warning : qemuGetProcessInfo:1406 : cannot parse process
> > > status data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21
> > > 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot
> > > parse process status data Jan 21 14:10:07 HCI01 libvirtd:
> > > 2019-01-21 13:10:07.126+0000: 2704: error :
> > > virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev:
> > > Interface not found Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out
> > > -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet 0'
> > > failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out
> > > vnet0 -g FP-vnet0' failed: iptables v 1.4.21: goto 'FP-vnet0' is
> > > not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev
> > > --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1. 4.21: goto
> > > 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0'
> > > failed: iptable s v1.4.21: goto 'HJ-vnet0' is not a
> > > chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FP-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vne t0'
> > > failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed:
> > > ip6tables v 1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HJ-vnet0' failed: ip6tab les v1.4.21: goto
> > > 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0'
> > > failed: Illegal target name 'libvirt-J-vnet0'. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> > > libvirt-J-vnet0' failed: Illegal targe t name 'libvirt-J-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal targ et name 'libvirt-P-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
> > > Chain 'libvirt-J-vnet0' doesn't exis t. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -L libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> > > vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not
> > > a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or
> > > 'iptables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: iptables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FO-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: ip6tables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D
> > > POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target
> > > name 'libvirt-O-vnet0'. Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L
> > > libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> > > Chain 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-O-vnet0' failed: Chain
> > > 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01 vdsm[3650]:
> > > WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0
> > > already removed Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting
> > > to remove a non existing network:
> > > ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21 14:10:07
> > > HCI01 vdsm[3650]: WARN Attempting to remove a non existing net
> > > user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21
> > > 14:10:07 HCI01 vdsm[3650]: WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0
> > > already removed
> > >
> > > any ideas on that?
> > > _______________________________________________
> > > Users mailing list -- users(a)ovirt.org
> > > To unsubscribe send an email to users-leave(a)ovirt.org
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > oVirt Code of Conduct:
> > > https://www.ovirt.org/community/about/community-guidelines/ List
> > > Archives:
> > >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMMX5CY6VHF...
> >
>
>
6 years, 3 months
Re: ovirt 4.2 HCI rollout
by Simone Tiraboschi
On Thu, Jan 24, 2019 at 3:14 PM Markus Schaufler <
markus.schaufler(a)digit-all.at> wrote:
> The hosted engine is not running and cannot be started.
>
>
>
Do you have on your first host a directory
like /var/log/ovirt-hosted-engine-setup/engine-logs-2019-01-21T22:47:03Z
with logs from the engine VM?
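If you're unsure, something like this on the first host should show whether
the deploy run collected them (the path pattern is just an assumption based on
the directory name above, adjust it to what you actually have):

  # list the engine-log bundles collected by hosted-engine --deploy
  ls -d /var/log/ovirt-hosted-engine-setup/engine-logs-*/
  # and look for the engine's own log inside the bundle
  find /var/log/ovirt-hosted-engine-setup/engine-logs-* -name engine.log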
>
> ------------------------------
> *Von:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Gesendet:* Donnerstag, 24. Jänner 2019 14:45:59
> *An:* Markus Schaufler
> *Cc:* Dominik Holler; users(a)ovirt.org
> *Betreff:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler <
> markus.schaufler(a)digit-all.at> wrote:
>
> Hi,
>
>
> thanks for the replies.
>
>
> I updated to 4.2.8 and tried again:
>
>
> [ INFO ] TASK [Check engine VM health]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.165316", "end": "2019-01-24 14:12:06.899564", "rc": 0, "start":
> "2019-01-24 14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true,
> \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}"]}
>
>
>
> It's still the same issue: the host fails to properly check the status of
> the engine via its dedicated health page.
>
> You should connect to ovirt-hci.res01.ads.ooe.local and check the status
> of the ovirt-engine service and /var/log/ovirt-engine/engine.log there.
>
>
>
> [ INFO ] TASK [Check VM status at virt level]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Fail if engine VM is not running]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get VDSM's target engine VM stats]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Convert stats to JSON format]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Fail if the Engine has no IP address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved
> IP]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line
> 518, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
> - name: Reconfigure OVN central address\n ^ here\n"}
>
>
>
> attached you'll find the setup logs.
>
>
> best regards,
>
> Markus Schaufler
> ------------------------------
> *Von:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Gesendet:* Donnerstag, 24. Jänner 2019 11:56:50
> *An:* Dominik Holler
> *Cc:* Markus Schaufler; users(a)ovirt.org
> *Betreff:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Tue, 22 Jan 2019 11:15:12 +0000
> Markus Schaufler <markus.schaufler(a)digit-all.at> wrote:
>
> > Thanks for your reply,
> >
> > getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> > 10.1.31.20
> >
> > attached you'll find the logs.
> >
>
> Thanks, to my eyes this looks like a bug.
> I tried to isolate the relevant lines in the attached playbook.
>
> Markus, would you be so kind to check if ovirt-4.2.8 is working for you?
>
>
>
> OK, understood: the real error was just a few lines before what Dominik
> pointed out:
>
> "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
> true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}",
> "stdout_lines": [
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
> \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}"
> ]
> }"
> 2019-01-21 13:57:46,695+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Check engine VM health',
> 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True,
> \'stderr_lines\': [], u\'changed\': True, u\'end\': u\'2019-01-21
> 13:57:46.242423\', \'_ansible_no_log\': False, u\'stdout\': u\'{"1":
> {"conf_on_shared_storage": true, "live-data": true, "extra":
> "metadata_parse_version=1\\\\nmetadata_feature_version=1\\\\ntimestamp=5792
> (Mon Jan 21 13:57:4', 'ansible_host': u'localhost', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'}
>
> and in particular it's here:
> for some reason we got \"engine-status\": {\"reason\": \"failed
> liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\":
> \"Up\"}
> over 120 attempts: we have to check engine.log (it got collected as well
> from the engine VM) to understand why the engine was failing to start.
>
>
>
>
> > ________________________________
> > Von: Dominik Holler <dholler(a)redhat.com>
> > Gesendet: Montag, 21. Jänner 2019 17:52:35
> > An: Markus Schaufler
> > Cc: users(a)ovirt.org; Simone Tiraboschi
> > Betreff: Re: [ovirt-users] ovirt 4.2 HCI rollout
> >
> > Would you please share the related ovirt-host-deploy-ansible-*.log
> > stored on the host in /var/log/ovirt-hosted-engine-setup ?
> >
> > Would you please also share the output of
> > getent ahosts YOUR_HOSTED_ENGINE_FQDN | cut -d' ' -f1 | uniq
> > if executed on this host?
> >
> >
> > On Mon, 21 Jan 2019 13:37:53 -0000
> > "Markus Schaufler" <markus.schaufler(a)digit-all.at> wrote:
> >
> > > Hi,
> > >
> > > I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 centos VM's by
> > > following
> > >
> https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyp...
> > > gluster deployment was successful but at HE deployment "stage 5" I
> > > got following error:
> > >
> > > [ INFO ] TASK [Reconfigure OVN central address]
> > > [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> > > an option with an undefined variable. The error was: 'dict object'
> > > has no attribute 'stdout_lines'\n\nThe error appears to have been
> > > in
> > > '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> > > line 522, column 5, but may\nbe elsewhere in the file depending on
> > > the exact syntax problem.\n\nThe offending line appears to be:\n\n
> > > #
> > >
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
> > > - name: Reconfigure OVN central address\n ^ here\n"}
> > >
> > >
> > > /var/log/messages:
> > > Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> > > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> > > Engine VM stopped on localhost Jan 21 14:10:01 HCI01 systemd:
> > > Started Session 22 of user root. Jan 21 14:10:02 HCI01 systemd:
> > > Started Session c306 of user root. Jan 21 14:10:03 HCI01 systemd:
> > > Started Session c307 of user root. Jan 21 14:10:06 HCI01
> > > vdsm[3650]: WARN executor state: count=5 workers=set([<Worker
> > > name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker
> > > name=periodic/1 running <Task discardable <Operation
> > > action=<vdsm.virt.sampling.VMBulkstatsMonitor object at
> > > 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at
> > > 0x7fd33c1e0ed0> discarded task#=413 at 0x7fd2d5ed0510>, <Worker
> > > name=periodic/3 waiting task#=414 at 0x7fd2d5ed0b10>, <Worker
> > > name=periodic/5 waiting task#=0 at 0x7fd2d425f650>, <Worker
> > > name=periodic/2 waiting task#=412 at 0x7fd2d5ed07d0>])
> > > Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered
> > > disabled state Jan 21 14:10:06 HCI01 kernel: device vnet0 left
> > > promiscuous mode Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port
> > > 2(vnet0) entered disabled state
> > > Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info>
> > > [1548076206.9177] device (vnet0): state change: disconnected ->
> > > unmanaged (reason 'unmanaged', sys-iface-state: 'remo ved') Jan 21
> > > 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180]
> > > device (vnet0): released from master device ovirtmgmt Jan 21
> > > 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space
> > > available Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21
> > > 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to
> > > read from monitor: Connection reset by peer Jan 21 14:10:07 HCI01
> > > kvm: 0 guests now active Jan 21 14:10:07 HCI01 systemd-machined:
> > > Machine qemu-3-HostedEngine terminated. Jan 21 14:10:07 HCI01
> > > libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning :
> > > qemuGetProcessInfo:1406 : cannot parse process status data Jan 21
> > > 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704:
> > > warning : qemuGetProcessInfo:1406 : cannot parse process status
> > > data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000:
> > > 2704: warning : qemuGetProcessInfo:1406 : cannot parse process
> > > status data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21
> > > 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot
> > > parse process status data Jan 21 14:10:07 HCI01 libvirtd:
> > > 2019-01-21 13:10:07.126+0000: 2704: error :
> > > virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev:
> > > Interface not found Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out
> > > -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet 0'
> > > failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out
> > > vnet0 -g FP-vnet0' failed: iptables v 1.4.21: goto 'FP-vnet0' is
> > > not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev
> > > --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1. 4.21: goto
> > > 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0'
> > > failed: iptable s v1.4.21: goto 'HJ-vnet0' is not a
> > > chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FP-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vne t0'
> > > failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed:
> > > ip6tables v 1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HJ-vnet0' failed: ip6tab les v1.4.21: goto
> > > 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0'
> > > failed: Illegal target name 'libvirt-J-vnet0'. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> > > libvirt-J-vnet0' failed: Illegal targe t name 'libvirt-J-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal targ et name 'libvirt-P-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
> > > Chain 'libvirt-J-vnet0' doesn't exis t. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -L libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> > > vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not
> > > a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or
> > > 'iptables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: iptables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FO-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: ip6tables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D
> > > POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target
> > > name 'libvirt-O-vnet0'. Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L
> > > libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> > > Chain 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-O-vnet0' failed: Chain
> > > 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01 vdsm[3650]:
> > > WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0
> > > already removed Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting
> > > to remove a non existing network:
> > > ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21 14:10:07
> > > HCI01 vdsm[3650]: WARN Attempting to remove a non existing net
> > > user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21
> > > 14:10:07 HCI01 vdsm[3650]: WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0
> > > already removed
> > >
> > > any ideas on that?
> > > _______________________________________________
> > > Users mailing list -- users(a)ovirt.org
> > > To unsubscribe send an email to users-leave(a)ovirt.org
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > oVirt Code of Conduct:
> > > https://www.ovirt.org/community/about/community-guidelines/ List
> > > Archives:
> > >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMMX5CY6VHF...
> >
>
>
6 years, 3 months
Re: ovirt 4.2 HCI rollout
by Simone Tiraboschi
On Thu, Jan 24, 2019 at 2:21 PM Markus Schaufler <
markus.schaufler(a)digit-all.at> wrote:
> Hi,
>
>
> thanks for the replies.
>
>
> I updated to 4.2.8 and tried again:
>
>
> [ INFO ] TASK [Check engine VM health]
> [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 120, "changed":
> true, "cmd": ["hosted-engine", "--vm-status", "--json"], "delta":
> "0:00:00.165316", "end": "2019-01-24 14:12:06.899564", "rc": 0, "start":
> "2019-01-24 14:12:06.734248", "stderr": "", "stderr_lines": [], "stdout":
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}", "stdout_lines": ["{\"1\": {\"conf_on_shared_storage\": true,
> \"live-data\": true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=3049
> (Thu Jan 24 14:11:59
> 2019)\\nhost-id=1\\nscore=3400\\nvm_conf_refresh_time=3049 (Thu Jan 24
> 14:11:59
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3400, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"0c1a3ddb\",
> \"local_conf_timestamp\": 3049, \"host-ts\": 3049}, \"global_maintenance\":
> false}"]}
>
It's still the same issue: the host fails to properly check the status of
the engine via its dedicated health page.
You should connect to ovirt-hci.res01.ads.ooe.local and check the status of
the ovirt-engine service and /var/log/ovirt-engine/engine.log there.
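For example, something along these lines on the engine VM (the health URL is
the endpoint the HA agent's liveliness check normally polls; treat the exact
path as an assumption and adjust the FQDN to your setup):

  # check whether the engine service actually came up
  systemctl status ovirt-engine
  # the last part of the engine log usually tells why startup failed
  tail -n 100 /var/log/ovirt-engine/engine.log
  # this is roughly what the liveliness check polls
  curl http://ovirt-hci.res01.ads.ooe.local/ovirt-engine/services/health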
> [ INFO ] TASK [Check VM status at virt level]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Fail if engine VM is not running]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get VDSM's target engine VM stats]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Convert stats to JSON format]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address from VDSM stats]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [Fail if the Engine has no IP address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Fail if Engine IP is different from engine's FQDN resolved
> IP]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Get target engine VM IPv4 address]
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [Reconfigure OVN central address]
> [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute 'stdout_lines'\n\nThe error appears to have been in
> '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line
> 518, column 5, but may\nbe elsewhere in the file depending on the exact
> syntax problem.\n\nThe offending line appears to be:\n\n #
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
> - name: Reconfigure OVN central address\n ^ here\n"}
>
>
>
> attached you'll find the setup logs.
>
>
> best regards,
>
> Markus Schaufler
> ------------------------------
> *Von:* Simone Tiraboschi <stirabos(a)redhat.com>
> *Gesendet:* Donnerstag, 24. Jänner 2019 11:56:50
> *An:* Dominik Holler
> *Cc:* Markus Schaufler; users(a)ovirt.org
> *Betreff:* Re: [ovirt-users] ovirt 4.2 HCI rollout
>
>
>
> On Thu, Jan 24, 2019 at 9:40 AM Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Tue, 22 Jan 2019 11:15:12 +0000
> Markus Schaufler <markus.schaufler(a)digit-all.at> wrote:
>
> > Thanks for your reply,
> >
> > getent ahosts ovirt-hci.res01.ads.ooe.local | cut -d' ' -f1 | uniq
> > 10.1.31.20
> >
> > attached you'll find the logs.
> >
>
> Thanks, to my eyes this looks like a bug.
> I tried to isolate the relevant lines in the attached playbook.
>
> Markus, would you be so kind to check if ovirt-4.2.8 is working for you?
>
>
>
> OK, understood: the real error was just a few lines before what Dominik
> pointed out:
>
> "stdout": "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\":
> true, \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}",
> "stdout_lines": [
> "{\"1\": {\"conf_on_shared_storage\": true, \"live-data\": true,
> \"extra\":
> \"metadata_parse_version=1\\nmetadata_feature_version=1\\ntimestamp=5792
> (Mon Jan 21 13:57:45
> 2019)\\nhost-id=1\\nscore=3000\\nvm_conf_refresh_time=5792 (Mon Jan 21
> 13:57:45
> 2019)\\nconf_on_shared_storage=True\\nmaintenance=False\\nstate=EngineStarting\\nstopped=False\\n\",
> \"hostname\": \"HCI01.res01.ads.ooe.local\", \"host-id\": 1,
> \"engine-status\": {\"reason\": \"failed liveliness check\", \"health\":
> \"bad\", \"vm\": \"up\", \"detail\": \"Up\"}, \"score\": 3000, \"stopped\":
> false, \"maintenance\": false, \"crc32\": \"ba303717\",
> \"local_conf_timestamp\": 5792, \"host-ts\": 5792}, \"global_maintenance\":
> false}"
> ]
> }"
> 2019-01-21 13:57:46,695+0100 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u'Check engine VM health',
> 'ansible_result': u'type: <type \'dict\'>\nstr: {\'_ansible_parsed\': True,
> \'stderr_lines\': [], u\'changed\': True, u\'end\': u\'2019-01-21
> 13:57:46.242423\', \'_ansible_no_log\': False, u\'stdout\': u\'{"1":
> {"conf_on_shared_storage": true, "live-data": true, "extra":
> "metadata_parse_version=1\\\\nmetadata_feature_version=1\\\\ntimestamp=5792
> (Mon Jan 21 13:57:4', 'ansible_host': u'localhost', 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml'}
>
> and in particular it's here:
> for some reason we got \"engine-status\": {\"reason\": \"failed
> liveliness check\", \"health\": \"bad\", \"vm\": \"up\", \"detail\":
> \"Up\"}
> over 120 attempts: we have to check engine.log (it got collected as well
> from the engine VM) to understand why the engine was failing to start.
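For reference, that stdout blob is simply what "hosted-engine --vm-status --json" prints; a rough Python sketch for pulling the per-host engine health out of it (it assumes the field layout shown above, and allows for engine-status arriving as a JSON-encoded string on some versions):

import json
import subprocess

# run the same status command the deployment task polls
raw = subprocess.check_output(["hosted-engine", "--vm-status", "--json"])
status = json.loads(raw)

for host_id, host in status.items():
    if host_id == "global_maintenance":
        continue
    engine = host.get("engine-status", {})
    if isinstance(engine, str):
        # some builds emit this field as a JSON-encoded string
        engine = json.loads(engine)
    print(host.get("hostname"), "score:", host.get("score"),
          "vm:", engine.get("vm"), "health:", engine.get("health"),
          "reason:", engine.get("reason"))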
>
>
>
>
> > ________________________________
> > Von: Dominik Holler <dholler(a)redhat.com>
> > Gesendet: Montag, 21. Jänner 2019 17:52:35
> > An: Markus Schaufler
> > Cc: users(a)ovirt.org; Simone Tiraboschi
> > Betreff: Re: [ovirt-users] ovirt 4.2 HCI rollout
> >
> > Would you please share the related ovirt-host-deploy-ansible-*.log
> > stored on the host in /var/log/ovirt-hosted-engine-setup ?
> >
> > Would you please also share the output of
> > getent ahosts YOUR_HOSTED_ENGINE_FQDN | cut -d' ' -f1 | uniq
> > if executed on this host?
> >
> >
> > On Mon, 21 Jan 2019 13:37:53 -0000
> > "Markus Schaufler" <markus.schaufler(a)digit-all.at> wrote:
> >
> > > Hi,
> > >
> > > I'm trying a (nested) ovirt 4.2.7 HCI rollout on 3 centos VM's by
> > > following
> > >
> https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyp...
> > > gluster deployment was successful but at HE deployment "stage 5" I
> > > got following error:
> > >
> > > [ INFO ] TASK [Reconfigure OVN central address]
> > > [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes
> > > an option with an undefined variable. The error was: 'dict object'
> > > has no attribute 'stdout_lines'\n\nThe error appears to have been
> > > in
> > > '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml':
> > > line 522, column 5, but may\nbe elsewhere in the file depending on
> > > the exact syntax problem.\n\nThe offending line appears to be:\n\n
> > > #
> > >
> https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol...
> > > - name: Reconfigure OVN central address\n ^ here\n"}
> > >
> > >
> > > /var/log/messages:
> > > Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent
> > > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR
> > > Engine VM stopped on localhost Jan 21 14:10:01 HCI01 systemd:
> > > Started Session 22 of user root. Jan 21 14:10:02 HCI01 systemd:
> > > Started Session c306 of user root. Jan 21 14:10:03 HCI01 systemd:
> > > Started Session c307 of user root. Jan 21 14:10:06 HCI01
> > > vdsm[3650]: WARN executor state: count=5 workers=set([<Worker
> > > name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker
> > > name=periodic/1 running <Task discardable <Operation
> > > action=<vdsm.virt.sampling.VMBulkstatsMonitor object at
> > > 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at
> > > 0x7fd33c1e0ed0> discarded task#=413 at 0x7fd2d5ed0510>, <Worker
> > > name=periodic/3 waiting task#=414 at 0x7fd2d5ed0b10>, <Worker
> > > name=periodic/5 waiting task#=0 at 0x7fd2d425f650>, <Worker
> > > name=periodic/2 waiting task#=412 at 0x7fd2d5ed07d0>]) Jan 21 14:10:06
> > > HCI01 kernel: ovirtmgmt: port 2(vnet0) entered disabled state Jan 21
> > > 14:10:06 HCI01 kernel: device vnet0 left promiscuous mode Jan 21
> > > 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered disabled state
> > > Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info>
> > > [1548076206.9177] device (vnet0): state change: disconnected ->
> > > unmanaged (reason 'unmanaged', sys-iface-state: 'remo ved') Jan 21
> > > 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180]
> > > device (vnet0): released from master device ovirtmgmt Jan 21
> > > 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space
> > > available Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21
> > > 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to
> > > read from monitor: Connection reset by peer Jan 21 14:10:07 HCI01
> > > kvm: 0 guests now active Jan 21 14:10:07 HCI01 systemd-machined:
> > > Machine qemu-3-HostedEngine terminated. Jan 21 14:10:07 HCI01
> > > libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning :
> > > qemuGetProcessInfo:1406 : cannot parse process status data Jan 21
> > > 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704:
> > > warning : qemuGetProcessInfo:1406 : cannot parse process status
> > > data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000:
> > > 2704: warning : qemuGetProcessInfo:1406 : cannot parse process
> > > status data Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21
> > > 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot
> > > parse process status data Jan 21 14:10:07 HCI01 libvirtd:
> > > 2019-01-21 13:10:07.126+0000: 2704: error :
> > > virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev:
> > > Interface not found Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out
> > > -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet 0'
> > > failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out
> > > vnet0 -g FP-vnet0' failed: iptables v 1.4.21: goto 'FP-vnet0' is
> > > not a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev
> > > --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1. 4.21: goto
> > > 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0'
> > > failed: iptable s v1.4.21: goto 'HJ-vnet0' is not a
> > > chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FP-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HJ-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vne t0'
> > > failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed:
> > > ip6tables v 1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HJ-vnet0' failed: ip6tab les v1.4.21: goto
> > > 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0'
> > > failed: Illegal target name 'libvirt-J-vnet0'. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j
> > > libvirt-J-vnet0' failed: Illegal targe t name 'libvirt-J-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j
> > > libvirt-P-vnet0' failed: Illegal targ et name 'libvirt-P-vnet0'.
> > > Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed:
> > > Chain 'libvirt-J-vnet0' doesn't exis t. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -L libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-J-vnet0' failed: Chain
> > > 'libvirt-J-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -F libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-P-vnet0' failed: Chain
> > > 'libvirt-P-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out
> > > vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not
> > > a chain#012#012Try `iptables -h' or 'iptables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or
> > > 'iptables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `iptables -h' or 'iptables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: iptables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FO-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X FI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed:
> > > iptables: No chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2
> > > -w -X HI-vnet0' failed: iptables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0'
> > > failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a
> > > chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more
> > > information. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m
> > > physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21:
> > > goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or
> > > 'ip6tables --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed:
> > > ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try
> > > `ip6tables -h' or 'ip6tables --help' for more information. Jan 21
> > > 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev
> > > --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto
> > > 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables
> > > --help' for more information. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT'
> > > failed: ip6tables: Bad rule (does a matching rule exist in that
> > > chain?). Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed:
> > > ip6tables: No chain/target/match by that name. Jan 21 14:10:07
> > > HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No
> > > chain/target/match by that name. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2
> > > -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that
> > > name. Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING:
> > > COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D
> > > POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target
> > > name 'libvirt-O-vnet0'. Jan 21 14:10:07 HCI01 firewalld[24040]:
> > > WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L
> > > libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist. Jan
> > > 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED:
> > > '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed:
> > > Chain 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01
> > > firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables
> > > --concurrent -t nat -X libvirt-O-vnet0' failed: Chain
> > > 'libvirt-O-vnet0' doesn't exist. Jan 21 14:10:07 HCI01 vdsm[3650]:
> > > WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0
> > > already removed Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting
> > > to remove a non existing network:
> > > ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21 14:10:07
> > > HCI01 vdsm[3650]: WARN Attempting to remove a non existing net
> > > user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9 Jan 21
> > > 14:10:07 HCI01 vdsm[3650]: WARN
> > > File:
> /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0
> > > already removed
> > >
> > > any ideas on that?
> > > _______________________________________________
> > > Users mailing list -- users(a)ovirt.org
> > > To unsubscribe send an email to users-leave(a)ovirt.org
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > oVirt Code of Conduct:
> > > https://www.ovirt.org/community/about/community-guidelines/ List
> > > Archives:
> > >
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/XMMX5CY6VHF...
> >
>
>
6 years, 3 months
lvm problem
by Nyika Csaba
Hi all,
I have an oVirt 4.2.8 cluster. The nodes are 4.2 oVirt nodes and the volumes (5) are attached to the nodes by FC.
2 weeks ago I made a small VM (CentOS 7 based) for myself to test with (named A). After the test I dropped the VM.
The next day I made another VM (named B) for the developers and tried to add a new disk to that VM (B). Then the original volume group (vg) of VM (B) went missing and I got back the vg of the VM (A) that I had dropped the day before!
I tried to restart the VM, but it never started again.
I dropped this VM (B) too, and tried to add a new disk to an older running VM (C), but its volume group changed to the vg of the VM (B) I had dropped before.
I looked into this „error” and I hit it whenever I delete or move disks at the end of the FC volume.
Has anybody ever seen an error like this?
Thanks,
csaba
PS: this cluster runs 120 production VMs, so….
6 years, 3 months
latest pycurl 7.43 breaks ovirtsdk4
by Nathanaël Blanchet
Hi all,
If anyone uses the latest pycurl 7.43 provided by pip or ansible tower/awx,
any ovirtsdk4 call will fail with the following log:
The full traceback is:
WARNING: The below traceback may *not* be related to the actual failure.
File "/tmp/ansible_ovirt_auth_payload_L1HK9E/__main__.py", line 202,
in <module>
import ovirtsdk4 as sdk
File
"/opt/awx/embedded/lib64/python2.7/site-packages/ovirtsdk4/__init__.py",
line 22, in <module>
import pycurl
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"ca_file": null,
"compress": true,
"headers": null,
"hostname": null,
"insecure": true,
"kerberos": false,
"ovirt_auth": null,
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"state": "present",
"timeout": 0,
"token": null,
"url": "https://acore.v100.abes.fr/ovirt-engine/api",
"username": "admin@internal"
}
},
"msg": "ovirtsdk4 version 4.2.4 or higher is required for this module"
}
The only workaround is to pin the pycurl version with
pip install -U "pycurl == 7.19.0"
(Before this, in tower/awx, you should create a custom venv.)
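A quick way to confirm, inside the tower/awx venv, which pycurl the SDK actually picks up is a small import check like this (nothing beyond the two imports is assumed):

try:
    import pycurl
    import ovirtsdk4 as sdk
    # pycurl.version shows the libcurl/SSL backend the module was built against
    print("pycurl:", pycurl.version)
    print("ovirtsdk4 loaded from:", sdk.__file__)
except ImportError as exc:
    # with pycurl 7.43 from pip this import is where the failure shows up
    print("import failed:", exc)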
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
6 years, 3 months
oVirt 4.2.8 CPU Compatibility
by Stefano Danzi
Hello!
I'm running oVirt 4.2.7.5-1.el7 on a 3-host cluster.
The cluster CPU type is "AMD Opteron G3".
On the Default cluster I can see the warning:
"Warning: The CPU type 'AMD Opteron G3' will not be supported in the
next minor version update"
Is it still supported in version 4.2.8? I can't find any reference in the
documentation or the changelog.
6 years, 3 months
ovirt 4.2 HCI rollout
by Markus Schaufler
Hi,
I'm trying a (nested) oVirt 4.2.7 HCI rollout on 3 CentOS VMs by following https://ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyp...
The Gluster deployment was successful, but at HE deployment "stage 5" I got the following error:
[ INFO ] TASK [Reconfigure OVN central address]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout_lines'\n\nThe error appears to have been in '/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.yml': line 522, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n # https://github.com/oVirt/ovirt-engine/blob/master/packaging/playbooks/rol... - name: Reconfigure OVN central address\n ^ here\n"}
/var/log/messages:
Jan 21 14:09:56 HCI01 journal: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Engine VM stopped on localhost
Jan 21 14:10:01 HCI01 systemd: Started Session 22 of user root.
Jan 21 14:10:02 HCI01 systemd: Started Session c306 of user root.
Jan 21 14:10:03 HCI01 systemd: Started Session c307 of user root.
Jan 21 14:10:06 HCI01 vdsm[3650]: WARN executor state: count=5 workers=set([<Worker name=periodic/4 waiting task#=141 at 0x7fd2d4316910>, <Worker name=periodic/1 running
<Task discardable <Operation action=<vdsm.virt.sampling.VMBulkstatsMonitor object at 0x7fd2d4679490> at 0x7fd2d4679710> timeout=7.5, duration=7 at 0x7fd33c1e0ed0> disca
rded task#=413 at 0x7fd2d5ed0510>, <Worker name=periodic/3 waiting task#=414 at 0x7fd2d5ed0b10>, <Worker name=periodic/5 waiting task#=0 at 0x7fd2d425f650>, <Worker name
=periodic/2 waiting task#=412 at 0x7fd2d5ed07d0>])
Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered disabled state
Jan 21 14:10:06 HCI01 kernel: device vnet0 left promiscuous mode
Jan 21 14:10:06 HCI01 kernel: ovirtmgmt: port 2(vnet0) entered disabled state
Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9177] device (vnet0): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'remo
ved')
Jan 21 14:10:06 HCI01 NetworkManager[3666]: <info> [1548076206.9180] device (vnet0): released from master device ovirtmgmt
Jan 21 14:10:06 HCI01 lldpad: recvfrom(Event interface): No buffer space available
Jan 21 14:10:06 HCI01 libvirtd: 2019-01-21 13:10:06.925+0000: 2651: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
Jan 21 14:10:07 HCI01 kvm: 0 guests now active
Jan 21 14:10:07 HCI01 systemd-machined: Machine qemu-3-HostedEngine terminated.
Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot parse process status data
Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot parse process status data
Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot parse process status data
Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.125+0000: 2704: warning : qemuGetProcessInfo:1406 : cannot parse process status data
Jan 21 14:10:07 HCI01 libvirtd: 2019-01-21 13:10:07.126+0000: 2704: error : virNetDevTapInterfaceStats:764 : internal error: /proc/net/dev: Interface not found
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vnet
0' failed: iptables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FP-vnet0' failed: iptables v
1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed: iptables v1.
4.21: goto 'FJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0' failed: iptable
s v1.4.21: goto 'HJ-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FP-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FP-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FJ-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FJ-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HJ-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X HJ-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FP-vne
t0' failed: ip6tables v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FP-vnet0' failed: ip6tables
v1.4.21: goto 'FP-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FJ-vnet0' failed: ip6tables v
1.4.21: goto 'FJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HJ-vnet0' failed: ip6tab
les v1.4.21: goto 'HJ-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FP-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FP-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FJ-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FJ-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HJ-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X HJ-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0' failed: Illegal target name 'libvirt-J-vnet0'.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-P-vnet0' failed: Illegal target name 'libvirt-P-vnet0'.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D PREROUTING -i vnet0 -j libvirt-J-vnet0' failed: Illegal targe
t name 'libvirt-J-vnet0'.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-P-vnet0' failed: Illegal targ
et name 'libvirt-P-vnet0'.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-J-vnet0' failed: Chain 'libvirt-J-vnet0' doesn't exis
t.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-P-vnet0' failed: Chain 'libvirt-P-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-J-vnet0' failed: Chain 'libvirt-J-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-J-vnet0' failed: Chain 'libvirt-J-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-P-vnet0' failed: Chain 'libvirt-P-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-P-vnet0' failed: Chain 'libvirt-P-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0' failed: iptables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed: iptables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0' failed: iptables v1.4.21: goto 'HI-vnet0' is not a chain#012#012Try `iptables -h' or 'iptables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FO-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FO-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F FI-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X FI-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -F HI-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -w -X HI-vnet0' failed: iptables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-is-bridged --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-out -m physdev --physdev-out vnet0 -g FO-vnet0' failed: ip6tables v1.4.21: goto 'FO-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in -m physdev --physdev-in vnet0 -g FI-vnet0' failed: ip6tables v1.4.21: goto 'FI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-host-in -m physdev --physdev-in vnet0 -g HI-vnet0' failed: ip6tables v1.4.21: goto 'HI-vnet0' is not a chain#012#012Try `ip6tables -h' or 'ip6tables --help' for more information.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -D libvirt-in-post -m physdev --physdev-in vnet0 -j ACCEPT' failed: ip6tables: Bad rule (does a matching rule exist in that chain?).
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FO-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FO-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F FI-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X FI-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -F HI-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ip6tables -w2 -w -X HI-vnet0' failed: ip6tables: No chain/target/match by that name.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -D POSTROUTING -o vnet0 -j libvirt-O-vnet0' failed: Illegal target name 'libvirt-O-vnet0'.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -L libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -F libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 firewalld[24040]: WARNING: COMMAND_FAILED: '/usr/sbin/ebtables --concurrent -t nat -X libvirt-O-vnet0' failed: Chain 'libvirt-O-vnet0' doesn't exist.
Jan 21 14:10:07 HCI01 vdsm[3650]: WARN File: /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.ovirt-guest-agent.0 already removed
Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting to remove a non existing network: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9
Jan 21 14:10:07 HCI01 vdsm[3650]: WARN Attempting to remove a non existing net user: ovirtmgmt/ea1b312c-a462-45a9-ab75-78008bc4c9c9
Jan 21 14:10:07 HCI01 vdsm[3650]: WARN File: /var/lib/libvirt/qemu/channels/ea1b312c-a462-45a9-ab75-78008bc4c9c9.org.qemu.guest_agent.0 already removed
any ideas on that?
6 years, 3 months
Q: Is it safe to execute on node "saslpasswd2 -a libvirt username" ?
by Andrei Verovski
Hi !
Is it safe to execute this command on an oVirt node?
saslpasswd2 -a libvirt username
It's a production environment, so screwing anything up is not an option.
I have no idea how VDSM interacts with libvirt, so I'm not sure about this.
Thanks in advance
Andrei
6 years, 3 months
Host non-responsive after yum update CentOS7/Ovirt3.6
by jaherring@usa.net
Hi, I'm working on a CentOS 7 based oVirt 3.6 system (ovirt-engine/db on one machine, two separate oVirt VM hosts) which has been running fine but mostly ignored for 2-3 years. Recently it was decided to update the OS as it was far behind on security updates, so one host was put into maintenance mode, yum update'd, and rebooted; when we then attempted to take it out of maintenance mode it became "non-responsive" and has stayed that way.
If I look in /var/log/ovirt-engine/engine.log on the engine machine I see for this host (vmserver2):
"ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesVDSCommand] (DefaultQuartzScheduler_Worker-36) [2bc0978d] Command 'GetCapabilitiesVDSCommand(HostName = vmserver2, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', hostId='6725086f-42c0-40eb-91f1-0f2411ea9432', vds='Host[vmserver2,6725086f-42c0-40eb-91f1-0f2411ea9432]'})' execution failed: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: Connection failed" and thereafter more errors. This keep repeating in the log.
In the Ovirt GUI I see multiple occurrences of log entries for the problem host:
"vmserver2...command failed: Vds timeout occurred"
"vmserver2...command failed: Heartbeat exceeded"
"vmserver2...command failed: internal error: Unknown CPU model Broadwell-noTSX-IBRS"
Firewall rules look identical to the host which is working normally but has not been updated.
Any thoughts about how to fix or further troubleshoot this?
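One quick check for the "Unknown CPU model Broadwell-noTSX-IBRS" message would be to ask the updated host's libvirt which CPU models it actually knows; a small sketch (read-only virsh connection, so no extra credentials needed):

import subprocess

# list the x86_64 CPU models known to this host's libvirt
models = subprocess.check_output(["virsh", "-r", "cpu-models", "x86_64"]).decode().split()
wanted = "Broadwell-noTSX-IBRS"
print(wanted, "known to libvirt:", wanted in models)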
6 years, 3 months
Re: Cannot Increase Hosted Engine VM Memory
by Douglas Duckworth
Hi Simone
Can I get help with this issue? I still cannot increase the memory for the Hosted Engine.
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug(a)med.cornell.edu<mailto:doug@med.cornell.edu>
O: 212-746-6305
F: 212-746-8690
On Thu, Jan 17, 2019 at 8:08 AM Douglas Duckworth <dod2014(a)med.cornell.edu<mailto:dod2014@med.cornell.edu>> wrote:
Sure, they're attached. In "first attempt" the error seems to be:
2019-01-17 07:49:24,795-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-29) [680f82b3-7612-4d91-afdc-43937aa298a2] EVENT_ID: FAILED_HOT_SET_MEMORY_NOT_DIVIDABLE(2,048), Failed to hot plug memory to VM HostedEngine. Amount of added memory (4000MiB) is not dividable by 256MiB.
Followed by:
2019-01-17 07:49:24,814-05 WARN [org.ovirt.engine.core.bll.UpdateRngDeviceCommand] (default task-29) [26f5f3ed] Validation of action 'UpdateRngDevice' failed for user admin@internal-authz. Reasons: ACTION_TYPE_FAILED_VM_IS_RUNNING
2019-01-17 07:49:24,815-05 ERROR [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-29) [26f5f3ed] Updating RNG device of VM HostedEngine (adf14389-1563-4b1a-9af6-4b40370a825b) failed. Old RNG device = VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', specParams='[source=urandom]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}. New RNG device = VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', specParams='[source=urandom]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}.
In "second attempt" I used values that are dividable by 256 MiB so that's no longer present. Though same error:
2019-01-17 07:56:59,795-05 INFO [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-22) [7059a48f] START, SetAmountOfMemoryVDSCommand(HostName = ovirt-hv1.med.cornell.edu<http://ovirt-hv1.med.cornell.edu>, Params:{hostId='cdd5ffda-95c7-4ffa-ae40-be66f1d15c30', vmId='adf14389-1563-4b1a-9af6-4b40370a825b', memoryDevice='VmDevice:{id='VmDeviceId:{deviceId='7f7d97cc-c273-4033-af53-bc9033ea3abe', vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='memory', type='MEMORY', specParams='[node=0, size=2048]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}', minAllocatedMem='6144'}), log id: 50873daa
2019-01-17 07:56:59,855-05 INFO [org.ovirt.engine.core.vdsbroker.SetAmountOfMemoryVDSCommand] (default task-22) [7059a48f] FINISH, SetAmountOfMemoryVDSCommand, log id: 50873daa
2019-01-17 07:56:59,862-05 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-22) [7059a48f] EVENT_ID: HOT_SET_MEMORY(2,039), Hotset memory: changed the amount of memory on VM HostedEngine from 4096 to 4096
2019-01-17 07:56:59,881-05 WARN [org.ovirt.engine.core.bll.UpdateRngDeviceCommand] (default task-22) [28fd4c82] Validation of action 'UpdateRngDevice' failed for user admin@internal-authz. Reasons: ACTION_TYPE_FAILED_VM_IS_RUNNING
2019-01-17 07:56:59,882-05 ERROR [org.ovirt.engine.core.bll.UpdateVmCommand] (default task-22) [28fd4c82] Updating RNG device of VM HostedEngine (adf14389-1563-4b1a-9af6-4b40370a825b) failed. Old RNG device = VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', specParams='[source=urandom]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}. New RNG device = VmRngDevice:{id='VmDeviceId:{deviceId='6435b2b5-163c-4f0c-934e-7994da60dc89', vmId='adf14389-1563-4b1a-9af6-4b40370a825b'}', device='virtio', type='RNG', specParams='[source=urandom]', address='', managed='true', plugged='true', readOnly='false', deviceAlias='', customProperties='null', snapshotId='null', logicalName='null', hostDevice='null'}.
This message repeats throughout engine.log:
2019-01-17 07:55:43,270-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-89) [] EVENT_ID: VM_MEMORY_UNDER_GUARANTEED_VALUE(148), VM HostedEngine on host ovirt-hv1.med.cornell.edu<http://ovirt-hv1.med.cornell.edu> was guaranteed 8192 MB but currently has 4224 MB
As you can see attached the host has plenty of memory.
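For what it's worth, here is a minimal Python SDK sketch for requesting a 256 MiB-aligned size (the engine URL and credentials are placeholders, and 8192 MiB is just an example multiple of 256):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

MiB = 2 ** 20
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder FQDN
    username='admin@internal',
    password='secret',            # placeholder
    insecure=True,                # or ca_file=... in a real setup
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=HostedEngine')[0]
# 8192 MiB is a multiple of 256 MiB, so the hot-plug path should accept it
vms_service.vm_service(vm.id).update(
    types.Vm(
        memory=8192 * MiB,
        memory_policy=types.MemoryPolicy(guaranteed=8192 * MiB),
    )
)
connection.close()

The guaranteed value in memory_policy is what the "was guaranteed 8192 MB" warning above refers to.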
Thank you Simone!
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug(a)med.cornell.edu<mailto:doug@med.cornell.edu>
O: 212-746-6305
F: 212-746-8690
On Thu, Jan 17, 2019 at 5:09 AM Simone Tiraboschi <stirabos(a)redhat.com<mailto:stirabos@redhat.com>> wrote:
On Wed, Jan 16, 2019 at 8:22 PM Douglas Duckworth <dod2014(a)med.cornell.edu<mailto:dod2014@med.cornell.edu>> wrote:
Sorry for accidental send.
Anyway, I'm trying to increase the physical memory, however it won't go above 4096 MB. The hypervisor has 64 GB.
Do I need to modify this value with Hosted Engine offline?
No, it's not required.
Can you please attach your engine.log for the relevant time frame?
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug(a)med.cornell.edu<mailto:doug@med.cornell.edu>
O: 212-746-6305
F: 212-746-8690
On Wed, Jan 16, 2019 at 1:58 PM Douglas Duckworth <dod2014(a)med.cornell.edu<mailto:dod2014@med.cornell.edu>> wrote:
Hello
I am trying to increase Hosted Engine physical memory above 4GB
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug(a)med.cornell.edu<mailto:doug@med.cornell.edu>
O: 212-746-6305
F: 212-746-8690
_______________________________________________
Users mailing list -- users(a)ovirt.org<mailto:users@ovirt.org>
To unsubscribe send an email to users-leave(a)ovirt.org<mailto:users-leave@ovirt.org>
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/WGSXQVVPJJ2...
6 years, 3 months
Re: Hosted Engine VM and Storage not showing up
by Simone Tiraboschi
On Mon, Jan 7, 2019 at 2:03 PM Vinícius Ferrão <ferrao(a)versatushpc.com.br>
wrote:
> Hello Simone,
>
> Sent from my iPhone
>
> On 7 Jan 2019, at 07:11, Simone Tiraboschi <stirabos(a)redhat.com> wrote:
>
>
>
> On Sun, Jan 6, 2019 at 5:31 PM <ferrao(a)versatushpc.com.br> wrote:
>
>> Hello,
>>
>> I’ve a new oVirt installation using oVirt 4.2.7.1 Node and after
>> deploying the hosted engine it does not show up on the interface even after
>> adding the first storage.
>>
>> The Datacenter is up but the engine VM and the engine storage does not
>> appear.
>>
>> I have the following message repeated constantly on /var/log/messages:
>>
>> Jan 4 20:17:30 ovirt1 journal: ovirt-ha-agent
>> ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm ERROR
>> Unable to identify the OVF_STORE volume, falling back to initial vm.conf.
>> Please ensure you already added your first data domain for regular VMs
>>
>> What’s wrong? Am I doing something different?
>>
>
> The import of external VM is broken in 4.2.7 as for
> https://bugzilla.redhat.com/show_bug.cgi?id=1649615
> It will be fixed with 4.2.8.
>
> In the mean time I strongly suggest to use the regular flow for
> hosted-engine deployment (simply skip --noansible option) since only the
> vintage deprecated flow is affected by this issue.
>
>
>
> Thanks for pointing out the issue. I was unable to find it on Bugzilla by
> myself. The title isn’t helping either.
>
> But on the other hand, I only used the legacy mode because the ansible mode fails.
>
Can you please attach a log of the issue?
>
> I’m not sure why it fails. I can try it again, but I can ask in advance:
> the management network is bonded, is this an issue? I think I’ve read
> something about this on this list but I’m unsure.
>
No, but you should set bond mode 1, 2, 3, or 4.
Teaming is not supported.
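If in doubt, the mode a bond is currently running in can be read straight from sysfs on the host; a tiny sketch (bond0 below is just a placeholder for your bond name):

# print the bonding mode the kernel reports, e.g. "802.3ad 4" or "active-backup 1"
bond = "bond0"   # placeholder: use your actual bond device name
with open("/sys/class/net/%s/bonding/mode" % bond) as f:
    print(f.read().strip())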
>
> Thanks,
>
>
>>
>> Additional infos:
>>
>> [root@ovirt1 ~]# vdsm-tool list-nets
>> ovirtmgmt (default route)
>> storage
>>
>> [root@ovirt1 ~]# ip a | grep "inet "
>> inet 127.0.0.1/8 scope host lo
>> inet 10.20.0.101/24 brd 10.20.0.255 scope global dynamic ovirtmgmt
>> inet 192.168.10.1/29 brd 192.168.10.7 scope global storage
>>
>> [root@ovirt1 ~]# mount | grep -i nfs
>> sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
>> 10.20.0.200:/mnt/pool0/ovirt/he on /rhev/data-center/mnt/10.20.0.200:_mnt_pool0_ovirt_he
>> type nfs4
>> (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=10.20.0.101,local_lock=none,addr=10.20.0.200)
>>
>> [root@ovirt1 ~]# hosted-engine --check-deployed
>> Returns nothing!
>>
>> [root@ovirt1 ~]# hosted-engine --check-liveliness
>> Hosted Engine is up!
>>
>> [root@ovirt1 ~]# hosted-engine --vm-status
>>
>> --== Host 1 status ==--
>>
>> conf_on_shared_storage : True
>> Status up-to-date : True
>> Hostname : ovirt1.local.versatushpc.com.br
>> Host ID : 1
>> Engine status : {"health": "good", "vm": "up",
>> "detail": "Up"}
>> Score : 3400
>> stopped : False
>> Local maintenance : False
>> crc32 : 1736a87d
>> local_conf_timestamp : 7836
>> Host timestamp : 7836
>> Extra metadata (valid at timestamp):
>> metadata_parse_version=1
>> metadata_feature_version=1
>> timestamp=7836 (Fri Jan 4 20:18:10 2019)
>> host-id=1
>> score=3400
>> vm_conf_refresh_time=7836 (Fri Jan 4 20:18:10 2019)
>> conf_on_shared_storage=True
>> maintenance=False
>> state=EngineUp
>> stopped=False
>>
>>
>> Thanks in advance,
>>
>> PS: Log files are available here:
>> http://www.if.ufrj.br/~ferrao/ovirt/issues/he-not-showing/
>> _______________________________________________
>> Users mailing list -- users(a)ovirt.org
>> To unsubscribe send an email to users-leave(a)ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/IQHM6YQ7HVB...
>>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/BPJAV4AVRN5...
>
>
6 years, 3 months
Migrating Hosted Engine environment from untagged to LACP bonded interfaces with tagged ovirtmgmt network
by Bernhard Dick
Hi,
I have an oVirt 4.2 environment running with the engine in hosted-engine
mode. We are now going to change our whole setup, moving to redundancy on
the switch side and using a different VLAN for the oVirt management traffic.
Currently all host and management traffic runs untagged on one network
interface on each of our hosts, and we want to change this to be VLAN-tagged
inside LACP bonds (each bond containing two network interfaces) on all
hosts. While changing the configuration for the VM networks should be
straightforward, as I can shut down all VMs during the migration, I'm asking
how to handle this for the hosted engine VM and its configuration. Is there
any information on how to do such a change?
Regards
Bernhard
6 years, 3 months
Adding host with local-storage VMs configured
by Callum Smith
Dear All,
We’re in a situation where we had to recover from an outage by installing a clean hosted engine and importing VMs from storage. We do, however, have a host that is configured for local storage and has a VM running on it. What is the best way to re-introduce this host to a new hosted engine with minimal outage of the VM on that host?
Regards,
Callum
--
Callum Smith
Research Computing Core
Wellcome Trust Centre for Human Genetics
University of Oxford
e. callum(a)well.ox.ac.uk<mailto:callum@well.ox.ac.uk>
6 years, 3 months
Q: "virsh list" on oVirt node
by Andrei Verovski
Hi !
"virsh" can't be executed directly on an oVirt node due to an authentication problem.
Only "virsh --readonly" works.
Someone recommended adding a new password with this:
saslpasswd2 -a libvirt username
I know that hacking directly into the oVirt node is somewhat deprecated, yet I still need this.
What is the correct, non-destructive way to accomplish this?
Thanks in advance.
Andrei
6 years, 3 months
Re: The built in group Everyone is troublesome.
by Jacob Green
Thank you for your help! This worked flawlessly and helped me
understand the engine database a little more!
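For anyone reading this later, a consolidated sketch of the procedure Paul
describes below (the IDs in angle brackets are placeholders taken from the API
queries, and a DB backup should be taken first):
# 1. find the id of the Everyone group and of the UserRole role
curl -s -k --user admin@internal:password \
    'https://engine.example.com/ovirt-engine/api/groups?search=everyone'
curl -s -k --user admin@internal:password \
    'https://engine.example.com/ovirt-engine/api/roles'
# 2. on the engine host, inspect and then remove the offending permission
psql -h localhost -U engine -d engine -c \
    "select id, role_id, object_id from permissions where ad_element_id='<everyone-group-id>';"
psql -h localhost -U engine -d engine -c \
    "delete from permissions where id='<permission-id-noted-above>';"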
On 12/04/2018 12:00 PM, Staniforth, Paul wrote:
>
> Get the id for the everyone group
> https://engine.example.com/ovirt-engine/api/groups?search=everyone
>
> Get the id for the UserRole
> https://engine.example.com/ovirt-engine/api/roles
>
> connect to the engine database
>
> e.g.
>
> psql -h localhost -U engine -d engine
>
> select * from permissions where ad_element_id='groupid';
>
> note the id of the permission, probably the last one but you can check
> by the role_id
> then delete the permission.
>
> delete from permissions where id='noted before';
>
> you should make a backup of your system before you do this.
>
>
> Regards,
>
> Paul S.
>
> ------------------------------------------------------------------------
> *From:* Staniforth, Paul
> *Sent:* 04 December 2018 17:23
> *To:* Jacob Green
> *Subject:* Re: [ovirt-users] The built in group Everyone is troublesome.
>
> Yes, that's not good; you need to remove the UserRole system permission,
> but they fixed it so you can't.
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1366205
>
>
> I think there may be a bug that allows you to add system permissions to
> the Everyone group in 4.2; you're only supposed to be able to change
> the permissions with a DB script.
>
>
> I'll look up my notes on how to remove the permission from the DB.
>
>
> Regards,
>
> Paul S.
>
>
> ------------------------------------------------------------------------
> *From:* Jacob Green <jgreen(a)aasteel.com>
> *Sent:* 04 December 2018 16:59
> *To:* Staniforth, Paul
> *Subject:* Re: [ovirt-users] The built in group Everyone is troublesome.
>
>
> If the picture does not come through, the following are the permissions:
>
> Group > Everyone
>
> Everyone > Role - UserRole,UserProfileEditor Object : (System)
>
>
> On 12/04/2018 10:20 AM, Staniforth, Paul wrote:
>> What are the permissions for the group Everyone? In particular, the system
>> permission should be just UserProfileEditor.
>>
>> Regards,
>> Paul S.
>> ________________________________________
>> From: Jacob Green<jgreen(a)aasteel.com>
>> Sent: 04 December 2018 15:20
>> To: users
>> Subject: [ovirt-users] The built in group Everyone is troublesome.
>>
>> So all my VMs are inheriting system permissions from group
>> everyone and giving all my users access to all my VMs, in ovirt 4.2. Is
>> there a best practices guide or any recommendation on how to clear this
>> up? Clicking remove on everyone does not work because Ovirt won't allow
>> me to remove a built in account.
>>
>>
>> Thank you
>>
>> --
>> Jacob Green
>>
>> Systems Admin
>>
>> American Alloy Steel
>>
>> 713-300-5690
>
> --
> Jacob Green
>
> Systems Admin
>
> American Alloy Steel
>
> 713-300-5690
--
Jacob Green
Systems Admin
American Alloy Steel
713-300-5690
6 years, 4 months
oVirt 4.2.7 export VM seems stuck at end of sparse disk
by Gianluca Cecchi
Hello,
It has happened twice, always with VMs composed of more than one disk.
Now I have a VM with 9 disks and a total of about 370Gb:
1 x 90Gb
5 x 50Gb
3 x 10Gb
I exported the VM to an export domain; the export started 2 hours and 10
minutes ago, at 15:46.
At the beginning the write rate on the export domain was 120MB/s (in line
with the I/O capabilities of the storage subsystem).
It seems 5 disks completed OK, while 4 have not completed, even though there
still seems to be some read activity:
with command
iotop -d 3 -k -o -P
I get this on hypervisor where qemu-img convert command is executing
Total DISK READ : 5712.75 K/s | Total DISK WRITE : 746.58 K/s
Actual DISK READ: 6537.11 K/s | Actual DISK WRITE: 6.89 K/s
PID PRIO USER DISK READ> DISK WRITE SWAPIN IO COMMAND
21238 idle vdsm 1344.18 K/s 183.12 K/s 0.00 % 0.15 % qemu-img
convert -p -t non~1d23-4829-8350-f81fa16ea8b0
21454 idle vdsm 1344.18 K/s 183.12 K/s 0.00 % 0.06 % qemu-img
convert -p -t non~443d-4563-8879-3dbd711b6936
21455 idle vdsm 1344.18 K/s 189.68 K/s 0.00 % 0.08 % qemu-img
convert -p -t non~b67d-4acd-9bb5-485bc3ffdf2e
21548 idle vdsm 1344.18 K/s 190.67 K/s 0.00 % 0.06 % qemu-img
convert -p -t non~7fc
On export share
[root@xfer ~]# ll -td
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/*
drwxr-xr-x 2 36 36 4096 Jan 21 17:21
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/ac33a8fe-f10e-4cbb-b121-2956f9925ade
drwxr-xr-x 2 36 36 4096 Jan 21 17:06
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/20c07082-d5ed-4804-867d-1f3f7c202b0e
drwxr-xr-x 2 36 36 4096 Jan 21 17:02
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/f8b1c6bc-de13-4416-9fe6-d7c26fb082b1
drwxr-xr-x 2 36 36 4096 Jan 21 17:02
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/1b5313ef-0ee5-4f9a-b2a6-baa153ff0984
drwxr-xr-x 2 36 36 4096 Jan 21 17:02
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/3f21c566-bc00-4dab-acbb-db9a3a1d76fa
drwxr-xr-x 2 36 36 4096 Jan 21 15:47
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/ed3a721e-fd1b-4fea-ad41-7ca56e94890e
drwxr-xr-x 2 36 36 4096 Jan 21 15:46
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/213f9ce9-6c34-4911-820e-4d1a96ba1791
drwxr-xr-x 2 36 36 4096 Jan 21 15:46
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/28fef57c-4254-4469-9f3f-0d1f114d4f78
drwxr-xr-x 2 36 36 4096 Jan 21 15:46
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/ea4fa5b1-a93a-4cb9-9552-e8ec67c5ff75
[root@xfer ~]# du -sh
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/*
11G
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/1b5313ef-0ee5-4f9a-b2a6-baa153ff0984
51G
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/20c07082-d5ed-4804-867d-1f3f7c202b0e
51G
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/213f9ce9-6c34-4911-820e-4d1a96ba1791
51G
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/28fef57c-4254-4469-9f3f-0d1f114d4f78
11G
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/3f21c566-bc00-4dab-acbb-db9a3a1d76fa
85G
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/ac33a8fe-f10e-4cbb-b121-2956f9925ade
51G
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/ea4fa5b1-a93a-4cb9-9552-e8ec67c5ff75
51G
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/ed3a721e-fd1b-4fea-ad41-7ca56e94890e
11G
/export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/f8b1c6bc-de13-4416-9fe6-d7c26fb082b1
[root@xfer ~]#
It seems there has been no further progress for more than 30 minutes...
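(For what it's worth, a crude way to confirm whether the remaining qemu-img
processes are still writing anything is to sample the image sizes on the
export domain twice, a minute apart, and compare:)
du -sb /export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/* > /tmp/sizes.1
sleep 60
du -sb /export/ovirt/a6a289ea-f160-4d35-b3aa-e59d171b4633/images/* > /tmp/sizes.2
diff /tmp/sizes.1 /tmp/sizes.2   # no output means no bytes were written in that minute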
I see no errors in the hypervisor's vdsm.log, and in engine.log I keep
getting this message every 10 seconds:
2019-01-21 18:08:19,060+01 INFO
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
(EE-ManagedThreadFactory-engineScheduled-Thread-10)
[a1c0e39f-1924-4ae8-8b48-30bcce669278] Command 'ExportVm' (id:
'dc0a58bd-e8ca-42a9-8de0-dd3ccabe0512') waiting on child command id:
'29d40db8-a0b9-4408-9e62-1abf3bccca65' type:'CopyImageGroup' to complete
All the disks except the "apparently" not completed one (the 90Gb disk) are
preallocated.
Any other hint on what to check?
Thanks,
Gianluca
6 years, 4 months
How to connect 2 hosts
by adamantini.peratikou@ouc.ac.cy
I have the following configuraton:
1 cluster, 2 Hosts
I would like to use a VM router under Host 1 to act as a DHCP server for the VMs on both HOST1 and HOST2. Is that possible?
6 years, 4 months
User Management
by Sakhi Hadebe
Hi,
I need some pointers to documentation that will help me configure users
with access and some rights (console access, starting and shutting down
their own VMs) to their VMs on the VM Portal.
I have created a user using the ovirt-aaa-jdbc-tool utility, but I am
unable to log in to the cockpit portal. It gives an error:
*The user shadebe@internal is not **authorized** to perform login*
User details:
[root@hostedengine ~]# ovirt-aaa-jdbc-tool query --what=user
--pattern="name=s*"
-- User shadebe(a35e8e14-d32b-4ff9-89e0-fd090a87146a) --
Namespace: *
Name: shadebe
ID: a35e8e14-d32b-4ff9-89e0-fd090a87146a
Display Name:
Email: sakhi(a)sanren.ac.za
First Name: Sakhi
Last Name: Hadebe
Department:
Title:
Description:
Account Disabled: false
Account Locked: false
Account Unlocked At: 1970-01-01 00:00:00Z
Account Valid From: 2019-01-16 09:08:48Z
Account Valid To: 2219-01-16 09:08:48Z
Account Without Password: false
Last successful Login At: 2019-01-16 09:32:52Z
Last unsuccessful Login At: 2019-01-16 09:32:36Z
Password Valid To: 2029-01-16 10:30:00Z
Please help.
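(For reference, a rough sketch of the internal-user flow used here, with the
names above; note that an internal account created this way still needs a
role such as UserRole granted on some object in the Administration Portal
before the portals will authorize the login:)
# create the user and set a password
ovirt-aaa-jdbc-tool user add shadebe --attribute=firstName=Sakhi --attribute=lastName=Hadebe
ovirt-aaa-jdbc-tool user password-reset shadebe --password-valid-to="2029-01-16 10:30:00Z"
# verify the account
ovirt-aaa-jdbc-tool query --what=user --pattern="name=shadebe"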
--
Regards,
Sakhi Hadebe
Engineer: South African National Research Network (SANReN)Competency
Area, Meraka, CSIR
Tel: +27 12 841 2308 <+27128414213>
Fax: +27 12 841 4223 <+27128414223>
Cell: +27 71 331 9622 <+27823034657>
Email: sakhi(a)sanren.ac.za <shadebe(a)csir.co.za>
6 years, 4 months
Unable to start or create a virtual machine
by Shikhar Verma
Hi,
I have created the virtual machine from the oVirt Manager, but when I try to open the console of the VM to do the installation, it only shows these two lines, even though I tried Run Once with CD-ROM as the first boot device and attached the CentOS 7 ISO:
SeaBIOS (version 1.11.0-2.el7)
Machine UUID -------
Also, from the Manager, the newly launched VM is showing green. And on the host machine it is showing this error:
Jan 21 19:23:24 servera libvirtd: 2019-01-21 13:53:24.286+0000: 12800: error : qemuDomainAgentAvailable:9133 : Guest agent is not responding: QEMU guest agent is not connected
I am using the latest version of ovirt-engine and of the host as well.
Setup I have created on my laptop using VMware Workstation:
Centos server : Manager : server.example.com (192.168.28.128)
Centos : Host : servera.example.com 192.168.28.130
Centos : NFS Server : utility.example.com 192.168.28.129 (created filesystems under /exports and exported them as /exports/iso & /exports/data1)
In /exports/iso I have RHEL 7 & CentOS 7 ISO images, and /exports/data1 will be used for the disk images of the VMs.
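(For reference, one commonly used layout for this kind of setup is sketched
below; the client subnet is an assumption, and oVirt expects the exported
directories to be owned by vdsm:kvm, i.e. 36:36:)
# /etc/exports on utility.example.com (sketch)
/exports/iso    192.168.28.0/24(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)
/exports/data1  192.168.28.0/24(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)
# ownership of the exported directories, then re-export
chown -R 36:36 /exports/iso /exports/data1
exportfs -ra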
From the Manager I have added the host and also created domains: one data domain (/exports/data1) and one ISO domain (/exports/iso).
Now the issue I am facing: when I try to create a VM, it allows me to create the VM, but the console is blank.
6 years, 4 months
Vdo based storage domain
by Leo David
Hi everyone,
I have read and tried to understand how VDO works, and it seems pretty
impressive, but I feel like I am missing something.
At the end of the day, my question is:
Can I entirely rely on the 10 times increase in usable space?
As an example, I have created a 4.8TB volume based on 3x480GB Samsung
SM863a enterprise SSD devices (full replica 3).
Should I be confident that I can use 4.8TB for my storage domain (VM data
store) without worrying about how repetitive the data will be in the future?
I am just trying to think as an end-user consumer who cannot predict whether
the data that will be written to the device will be compressible or not, and
only sees 4.8TB as truly usable space.
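(One way to avoid relying blindly on that 10x assumption is to keep an eye on
the real physical usage and savings of the VDO volume as data lands on it; a
sketch, where the volume name is a placeholder:)
# physical vs. logical usage and space savings of all VDO volumes
vdostats --human-readable
# more detail for a single volume
vdo status --name vdo_gluster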
Any thoughts, any experience to share ?
I'm sorry if my question sounds like a noob question, but I'm just trying to
put myself under the end user's hat.
Thank you very much,
Have a good day !
Leo
6 years, 4 months
failed to attach a samba active directory
by adam_xu@adagene.com.cn
Hello oVirt developers. I have attached a Samba AD DC successfully more than 10 times before via this link
https://www.ovirt.org/documentation/admin-guide/chap-Users_and_Roles.html
but this time, when I try to attach a Samba AD DC, I get a certificate error.
my test engine version: 4.2.7 or 4.2.8 (both have been tested)
full procedure like below:
# ovirt-engine-extension-aaa-ldap-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-extension-aaa-ldap-setup.conf.d/10-packaging.conf']
Log file: /tmp/ovirt-engine-extension-aaa-ldap-setup-20190122102450-0b1sjq.log
Version: otopi-1.7.8 (otopi-1.7.8-1.el7)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment customization
Welcome to LDAP extension configuration program
Available LDAP implementations:
1 - 389ds
2 - 389ds RFC-2307 Schema
3 - Active Directory
4 - IBM Security Directory Server
5 - IBM Security Directory Server RFC-2307 Schema
6 - IPA
7 - Novell eDirectory RFC-2307 Schema
8 - OpenLDAP RFC-2307 Schema
9 - OpenLDAP Standard Schema
10 - Oracle Unified Directory RFC-2307 Schema
11 - RFC-2307 Schema (Generic)
12 - RHDS
13 - RHDS RFC-2307 Schema
14 - iPlanet
Please select: 3
Please enter Active Directory Forest name: ntbaobei.com
[ INFO ] Resolving Global Catalog SRV record for ntbaobei.com
NOTE:
It is highly recommended to use secure protocol to access the LDAP server.
Protocol startTLS is the standard recommended method to do so.
Only in cases in which the startTLS is not supported, fallback to non standard ldaps protocol.
Use plain for test environments only.
Please select protocol to use (startTLS, ldaps, plain) [startTLS]:
Please select method to obtain PEM encoded CA certificate (File, URL, Inline, System, Insecure): File
File path: /etc/pki/ca-trust/extracted/pem/ca.pem
[ INFO ] Resolving SRV record 'ntbaobei.com'
[ INFO ] Connecting to LDAP using 'ldap://dc1.ntbaobei.com:389'
[ INFO ] Executing startTLS
[ INFO ] Connection succeeded
Enter search user DN (for example uid=username,dc=example,dc=com or leave empty for anonymous): vmail(a)ntbaobei.com
Enter search user password:
[ INFO ] Attempting to bind using 'vmail(a)ntbaobei.com'
Are you going to use Single Sign-On for Virtual Machines (Yes, No) [Yes]:
NOTE:
Profile name has to match domain name, otherwise Single Sign-On for Virtual Machines will not work.
Please specify profile name that will be visible to users [ntbaobei.com]:
[ INFO ] Stage: Setup validation
NOTE:
It is highly recommended to test drive the configuration before applying it into engine.
Login sequence is executed automatically, but it is recommended to also execute Search sequence manually after successful Login sequence.
Please provide credentials to test login flow:
Enter user name: xingya_xu
Enter user password:
[ INFO ] Executing login sequence...
Login output:
2019-01-22 10:26:06,777+08 INFO ========================================================================
2019-01-22 10:26:06,800+08 INFO ============================ Initialization ============================
2019-01-22 10:26:06,801+08 INFO ========================================================================
2019-01-22 10:26:06,841+08 INFO Loading extension 'ntbaobei.com-authn'
2019-01-22 10:26:06,911+08 INFO Extension 'ntbaobei.com-authn' loaded
2019-01-22 10:26:06,917+08 INFO Loading extension 'ntbaobei.com'
2019-01-22 10:26:06,953+08 INFO Extension 'ntbaobei.com' loaded
2019-01-22 10:26:06,954+08 INFO Initializing extension 'ntbaobei.com-authn'
2019-01-22 10:26:06,960+08 INFO [ovirt-engine-extension-aaa-ldap.authn::ntbaobei.com-authn] Creating LDAP pool 'authz'
2019-01-22 10:26:07,324+08 WARNING Exception: The connection reader was unable to successfully complete TLS negotiation: SSLHandshakeException(sun.security.validator.ValidatorException: No trusted certificate found), ldapSDKVersion=4.0.5, revision=b28fb50058dfe2864171df2448ad2ad2b4c2ad58
2019-01-22 10:26:07,325+08 INFO [ovirt-engine-extension-aaa-ldap.authn::ntbaobei.com-authn] Creating LDAP pool 'authn'
2019-01-22 10:26:08,748+08 WARNING Exception: The connection reader was unable to successfully complete TLS negotiation: SSLHandshakeException(sun.security.validator.ValidatorException: No trusted certificate found), ldapSDKVersion=4.0.5, revision=b28fb50058dfe2864171df2448ad2ad2b4c2ad58
2019-01-22 10:26:08,773+08 WARNING Ignoring records from pool: 'authz'
2019-01-22 10:26:08,774+08 WARNING Ignoring records from pool: 'authz'
2019-01-22 10:26:08,775+08 INFO Extension 'ntbaobei.com-authn' initialized
2019-01-22 10:26:08,776+08 INFO Initializing extension 'ntbaobei.com'
2019-01-22 10:26:08,776+08 INFO [ovirt-engine-extension-aaa-ldap.authz::ntbaobei.com] Creating LDAP pool 'authz'
2019-01-22 10:26:09,414+08 INFO [ovirt-engine-extension-aaa-ldap.authz::ntbaobei.com] LDAP pool 'authz' information: vendor='Samba Team (https://www.samba.org)' version='4.8.8-SerNet-RedHat-18.el7'
2019-01-22 10:26:09,415+08 INFO [ovirt-engine-extension-aaa-ldap.authz::ntbaobei.com] Creating LDAP pool 'gc'
2019-01-22 10:26:10,148+08 WARNING Exception: The connection reader was unable to successfully complete TLS negotiation: SSLHandshakeException(sun.security.validator.ValidatorException: No trusted certificate found), ldapSDKVersion=4.0.5, revision=b28fb50058dfe2864171df2448ad2ad2b4c2ad58
2019-01-22 10:26:10,179+08 INFO [ovirt-engine-extension-aaa-ldap.authz::ntbaobei.com] Creating LDAP pool 'authz(a)ntbaobei.com'
2019-01-22 10:26:10,269+08 WARNING Exception: The connection reader was unable to successfully complete TLS negotiation: SSLHandshakeException(sun.security.validator.ValidatorException: No trusted certificate found), ldapSDKVersion=4.0.5, revision=b28fb50058dfe2864171df2448ad2ad2b4c2ad58
2019-01-22 10:26:10,270+08 WARNING Ignoring records from pool: 'authz(a)ntbaobei.com'
2019-01-22 10:26:10,282+08 WARNING Ignoring records from pool: 'authz(a)ntbaobei.com'
2019-01-22 10:26:10,283+08 INFO [ovirt-engine-extension-aaa-ldap.authz::ntbaobei.com] Available Namespaces: []
2019-01-22 10:26:10,283+08 INFO Extension 'ntbaobei.com' initialized
2019-01-22 10:26:10,284+08 INFO Start of enabled extensions list
2019-01-22 10:26:10,284+08 INFO Instance name: 'ntbaobei.com', Extension name: 'ovirt-engine-extension-aaa-ldap.authz', Version: '1.3.8', Notes: 'Display name: ovirt-engine-extension-aaa-ldap-1.3.8-1.el7', License: 'ASL 2.0', Home: 'http://www.ovirt.org', Author 'The oVirt Project', Build interface Version: '0', File: '/tmp/tmp01ChKj/extensions.d/ntbaobei.com.properties', Initialized: 'true'
2019-01-22 10:26:10,285+08 INFO Instance name: 'ntbaobei.com-authn', Extension name: 'ovirt-engine-extension-aaa-ldap.authn', Version: '1.3.8', Notes: 'Display name: ovirt-engine-extension-aaa-ldap-1.3.8-1.el7', License: 'ASL 2.0', Home: 'http://www.ovirt.org', Author 'The oVirt Project', Build interface Version: '0', File: '/tmp/tmp01ChKj/extensions.d/ntbaobei.com-authn.properties', Initialized: 'true'
2019-01-22 10:26:10,285+08 INFO End of enabled extensions list
2019-01-22 10:26:10,285+08 INFO ========================================================================
2019-01-22 10:26:10,285+08 INFO ============================== Execution ===============================
2019-01-22 10:26:10,286+08 INFO ========================================================================
2019-01-22 10:26:10,286+08 INFO Iteration: 0
2019-01-22 10:26:10,287+08 INFO Profile='ntbaobei.com' authn='ntbaobei.com-authn' authz='ntbaobei.com' mapping='null'
2019-01-22 10:26:10,287+08 INFO API: -->Authn.InvokeCommands.AUTHENTICATE_CREDENTIALS profile='ntbaobei.com' user='testusernamehere'
2019-01-22 10:26:10,290+08 INFO API: <--Authn.InvokeCommands.AUTHENTICATE_CREDENTIALS profile='ntbaobei.com' result=GENERAL_ERROR
2019-01-22 10:26:10,296+08 SEVERE Authn.Result code is: GENERAL_ERROR
[ ERROR ] Login sequence failed
Please investigate details of the failure (search for lines containing SEVERE log level).
Select test sequence to execute (Done, Abort, Login, Search) [Abort]:
[ ERROR ] Failed to execute stage 'Setup validation': Aborted by user
[ INFO ] Stage: Clean up
Log file is available at /tmp/ovirt-engine-extension-aaa-ldap-setup-20190122102450-0b1sjq.log:
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
I have copied the CA certificate file from the Samba AD DC to /etc/pki/ca-trust/extracted/pem/ca.pem on the engine server.
I think the warning "SSLHandshakeException(sun.security.validator.ValidatorException: No trusted certificate found)" is the reason why I cannot pass the user login test?
I once passed the user login test with an older version such as 4.2.6.
I can still pass the user login test using the startTLS Insecure option.
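(A quick way to check whether that PEM file actually matches what the DC
presents is to validate the TLS handshake against it directly; a sketch,
assuming the DC also listens on ldaps/636:)
# "Verify return code: 0 (ok)" means the CA file validates the DC's certificate chain
openssl s_client -connect dc1.ntbaobei.com:636 \
    -CAfile /etc/pki/ca-trust/extracted/pem/ca.pem < /dev/null 2>/dev/null | grep 'Verify return code'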
The full log has been attached to this mail. Thanks.
yours Adam
6 years, 4 months
Re: oVirt Node install - kickstart postintall
by Ryan Barry
The quick answer is that operations in %post must be performed before
'nodectl init'. If that's done, they'll happily stick.
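A minimal sketch of that ordering in a kickstart %post (the public key is a
placeholder; the point, per the answer above, is simply that the
root-filesystem changes come before nodectl init):
%post --erroronfail
mkdir -p -m 0700 /root/.ssh
# placeholder key
echo 'ssh-rsa AAAA...example user@example.com' >> /root/.ssh/authorized_keys
chmod 0600 /root/.ssh/authorized_keys
restorecon -R /root/.ssh
nodectl init
%end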
On Mon, Jan 14, 2019, 9:29 PM Brad Riemann <Brad.Riemann(a)cloud5.com wrote:
> Can you send over the whole %post segment via pastebin? I think I've run
> across something similar that I addressed, but just want to be sure.
>
>
> Brad Riemann
> Sr. System Architect
> Cloud5 Communications
>
> ------------------------------
> *From:* jeanbaptiste(a)nfrance.com <jeanbaptiste(a)nfrance.com>
> *Sent:* Friday, January 11, 2019 1:51:14 AM
> *To:* users(a)ovirt.org
> *Subject:* [ovirt-users] oVirt Node install - kickstart postintall
>
>
> Hello everybody,
>
> For days I have been trying to install oVirt (via Foreman) in network mode
> (TFTP net install).
> All is great, but I want to perform some actions in the post-install (%post) section.
> Some actions are related to /etc/sysconfig/network-interfaces and
> another action is related to root's authorized_keys.
>
> When I try to add a pub key to a created authorized_keys for root, it
> works (verified in Anaconda).
> But after installation and the Anaconda reboot, I've noticed all my %post
> actions in /(root) are discarded. After the reboot, there is nothing in
> /root/.ssh for example.
> Whereas, in /etc, all my modifications are preserved.
>
> I thought it was an SELinux-related issue, but it is not related to SELinux.
>
> I am missing something. Can you please help me understand how the oVirt
> install / partitioning works?
>
> thanks for all
6 years, 4 months
multiple engines (active passive)
by maoz zadok
Hello users,
After a painful experience with a crashed engine and a problematic recovery
from a backup, I was thinking of creating one more engine in standby mode on
the same network, which would automatically pull backup files from the
active engine on a daily/hourly basis.
This way I would run recovery tests automatically every day and, in case of
engine failure, easily switch to the passive engine with a DNS record change.
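(For what it's worth, the scheduled-backup half of that is straightforward
with engine-backup; a sketch, with paths and the standby host name as
placeholders:)
# nightly on the active engine, e.g. from a cron job
engine-backup --mode=backup \
    --file=/var/backup/engine-$(date +%F).tar.gz \
    --log=/var/backup/engine-backup-$(date +%F).log
# ship it to the standby engine host for the daily restore test
scp /var/backup/engine-$(date +%F).tar.gz standby-engine.example.com:/var/backup/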
So the main question: is it harmful to have more than one engine alive if
only one is being modified? Can the standby engine affect the host nodes?
6 years, 4 months
How to import from another oVirt / RHV environment
by Gianluca Cecchi
Hello,
I have two different oVirt 4.2 environments and I want to migrate some big
VMs from one to another.
I'm not able to detach and re-attach the block-based domain where the source
disks are.
And I cannot use the export domain functionality.
But the 2 environments, both hosts and engines, are able to communicate.
Can I somehow use the import Virtual Machine functionality with the
"KVM (via Libvirt)" source type?
Or is there a way (possibly via the command line) to use virt-v2v in this
case?
Thanks,
Gianluca
6 years, 4 months
GlusterFS and oVirt
by Magnus Isaksson
Hello
I have quite some trouble getting Gluster to work.
I have 4 nodes running CentOS and oVirt. These 4 nodes are split up into 2 clusters.
I do not run Gluster via oVirt; I run it standalone to be able to use all 4 nodes in one Gluster volume.
I can add all peers successfully, and I can create a volume and start it successfully, but after that things start getting troublesome.
If I run gluster volume status after starting the volume, it times out. I have read that the ping timeout needs to be more than 0, so I set it to 30. Still the same problem.
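(For reference, the ping timeout mentioned above is the per-volume option
network.ping-timeout; a sketch, assuming a volume named "data":)
# value is in seconds; 30 is what was tried above
gluster volume set data network.ping-timeout 30
gluster volume get data network.ping-timeout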
From then on, I cannot stop or remove a volume; I have to stop glusterd and remove it from /var/lib/gluster/vols/* on all nodes to be able to do anything with Gluster.
From time to time when I do a gluster peer status it shows "disconnected", and when I run it again directly afterwards it shows "connected".
I get a lot of these errors in glusterd.log
[2019-01-20 12:53:46.087848] W [rpc-clnt-ping.c:223:rpc_clnt_ping_cbk] 0-management: socket disconnected
[2019-01-20 12:53:46.087858] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 0-management: RPC_CLNT_PING notify failed
[2019-01-20 12:53:55.091598] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 0-management: RPC_CLNT_PING notify failed
[2019-01-20 12:53:56.094846] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 0-management: RPC_CLNT_PING notify failed
[2019-01-20 12:53:56.097482] W [rpc-clnt-ping.c:246:rpc_clnt_ping_cbk] 0-management: RPC_CLNT_PING notify failed
What am I doing wrong?
//Magnus
6 years, 4 months
HyperConverged Self-Hosted deployment fails
by hunter86_bg@yahoo.com
Hello Community,
Recently I somehow managed to deploy a 2-node cluster on GlusterFS, but after a serious engine failure I have decided to start from scratch.
What I have done so far:
1. Installed CentOS 7 from scratch
2. Added the oVirt repositories, VDO, and Cockpit for oVirt
3. Deployed the Gluster cluster using Cockpit
4. Tried to deploy the hosted engine, which has failed several times.
Up to now I have detected that ovirt-ha-agent is giving:
яну 19 13:54:57 ovirt1.localdomain ovirt-ha-agent[16992]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 131, in _run_agent
return action(he)
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 55, in action_proper
return he.start_monitoring()
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 413, in start_monitoring
self._initialize_broker()
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 535, in _initialize_broker
m.get('options', {}))
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 83, in start_monitor
.format(type, options, e ))
RequestError: Failed to start monitor ping, options {'addr': '192.168.1.1'}: [Errno 2] No such file or directory
According to https://access.redhat.com/solutions/3353391, the /etc/ovirt-hosted-engine/hosted-engine.conf would be empty in this case, but mine looks OK:
[root@ovirt1 tmp]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
fqdn=engine.localdomain
vm_disk_id=bb0a9839-a05d-4d0a-998c-74da539a9574
vm_disk_vol_id=c1fc3c59-bc6e-4b74-a624-557a1a62a34f
vmid=d0e695da-ec1a-4d6f-b094-44a8cac5f5cd
storage=ovirt1.localdomain:/engine
nfs_version=
mnt_options=backup-volfile-servers=ovirt2.localdomain:ovirt3.localdomain
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=1
console=vnc
domainType=glusterfs
spUUID=00000000-0000-0000-0000-000000000000
sdUUID=444e524e-9008-48f8-b842-1ce7b95bf248
connectionUUID=e29cf818-5ee5-46e1-85c1-8aeefa33e95d
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject="C=EN, L=Test, O=Test, CN=Test"
vdsm_use_ssl=true
gateway=192.168.1.1
bridge=ovirtmgmt
metadata_volume_UUID=a3be2390-017f-485b-8f42-716fb6094692
metadata_image_UUID=368fb8dc-6049-4ef0-8cf8-9d3c4d772d59
lockspace_volume_UUID=41762f85-5d00-488f-bcd0-3de49ec39e8b
lockspace_image_UUID=de100b9b-07ac-4986-9d86-603475572510
conf_volume_UUID=4306f6d6-7fe9-499d-81a5-6b354e8ecb79
conf_image_UUID=d090dd3f-fc62-442a-9710-29eeb56b0019
# The following are used only for iSCSI storage
iqn=
portal=
user=
password=
port=
Ovirt-ha-agent version is:
ovirt-hosted-engine-ha-2.2.18-1.el7.noarch
Can you guide me on how to resolve this issue and deploy the self-hosted engine?
Where should I start?
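(A few first, non-destructive checks that might narrow this down; the agent
fails while asking the broker to start its ping monitor, so it is worth
confirming both gateway reachability and the broker service itself:)
# is the gateway from the error actually reachable from this host?
ping -c 3 192.168.1.1
# are the HA services running, and what do their logs say?
systemctl status ovirt-ha-broker ovirt-ha-agent
journalctl -u ovirt-ha-broker --since "30 min ago"
# restart the broker before the agent, then re-check
systemctl restart ovirt-ha-broker && systemctl restart ovirt-ha-agent
hosted-engine --vm-status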
6 years, 4 months
oVirt 4.3 rc2: export ova and attempt to import to itself via script?
by Gianluca Cecchi
Hello,
I verified that in 4.3 RC2 I can now export a running VM as an OVA:
I have a CentOS Atomic 7 VM, and when I export it as an OVA, a snapshot is
taken and then the OVA file seems to be generated directly, bypassing the
previous copy on the storage domain:
[root@hcinode1 ]# ll /export/
total 1141632
-rw-------. 1 root root 1401305088 Jan 18 11:10 c7atomic1.ova.tmp
[root@hcinode1 ]# ll /export/
total 1356700
-rw-------. 1 root root 1401305088 Jan 18 11:10 c7atomic1.ova
[root@hcinode1 ]#
And at the end the snapshot has been correctly removed.
events:
Vm c7atomic1 was exported successfully as a Virtual Appliance to path
/export/c7atomic1.ova on Host hcinode1 1/18/19 11:10:23 AM
Starting to export Vm c7atomic1 as a Virtual Appliance 1/18/19 11:08:47 AM
Now if, as a test, I try to import this generated OVA into the same environment
with this script:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload...
I get a duplicate key error, see below. Is it a limitation of the Python script
or something else?
If I go through the GUI and choose "Virtual Appliance (OVA)" as the source,
with the same host and file path, I can successfully import the OVA, provided
I change the name of the VM to be imported.
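(One thing worth checking is whether the duplicate key simply comes from the
OVF inside the OVA reusing the original disk ID, which still exists in this
same engine; a quick sketch to inspect it, assuming the descriptor keeps a
.ovf name inside the archive:)
# the OVA written by oVirt is a plain tar archive; list it and extract the OVF
tar -tf /export/c7atomic1.ova
tar -xf /export/c7atomic1.ova -C /tmp --wildcards '*.ovf'
grep -o 'diskId="[^"]*"' /tmp/*.ovf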
Thanks,
Gianluca
[root@hcinode1 ~]# ./upload_ova_as_vm.py /export/c7atomic1.ova Default
vmstore
Connecting...
Creating disk...
{'capacityAllocationUnits': 'byte * 2^30', 'capacity': '10', 'description':
'Auto-generated for Export To OVA', 'pass-discard': 'false', 'format': '
http://www.gnome.org/~markmc/qcow-image-format.html', 'volume-type':
'Sparse', 'boot': 'true', 'disk-alias': 'GlanceDisk-5f429e6',
'disk-interface': 'VirtIO', 'volume-format': 'COW', 'cinder_volume_type':
'', 'disk-description': 'CentOS 7 Atomic Host Image v1802 for x86_64
(5f429e6)', 'parentRef': '', 'fileRef':
'c6b2e076-1519-433e-9b37-2005c9ce6d2e', 'populatedSize': '1401290752',
'disk_storage_type': 'IMAGE', 'diskId':
'e4f92226-0f56-4822-a622-d1ebff41df9f', 'wipe-after-delete': 'false'}
Traceback (most recent call last):
File "./upload_ova_as_vm.py", line 133, in <module>
name=target_storage_domain_name
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line
6794, in add
return self._internal_add(disk, headers, query, wait)
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 232,
in _internal_add
return future.wait() if wait else future
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 55,
in wait
return self._code(response)
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 229,
in callback
self._check_fault(response)
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 132,
in _check_fault
self._raise_error(response, body)
File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 118,
in _raise_error
raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is
"[Internal Engine Error]". HTTP response code is 400.
[root@hcinode1 ~]#
On engine:
2019-01-18 11:24:46,561+01 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (default task-26)
[3b08d759-27d3-42f3-9eee-95c135e88a7b] BaseAsyncTask::startPollingTask:
Starting to poll task '81fbeac3-2c58-4f8a-a3da-44bdbe585beb'.
2019-01-18 11:24:46,691+01 ERROR
[org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-26)
[3b08d759-27d3-42f3-9eee-95c135e88a7b] Command
'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' failed:
CallableStatementCallback; SQL [{call insertpermission(?, ?, ?, ?,
?)}ERROR: duplicate key value violates unique constraint
"idx_combined_ad_role_object"
Detail: Key (ad_element_id, role_id,
object_id)=(9d37881c-1991-11e9-b002-00163e7cb696,
def0000a-0000-0000-0000-def00000000b, e4f92226-0f56-4822-a622-d1ebff41df9f)
already exists.
Where: SQL statement "INSERT INTO permissions (
ad_element_id,
id,
role_id,
object_id,
object_type_id
)
VALUES (
v_ad_element_id,
v_id,
v_role_id,
v_object_id,
v_object_type_id
)"
PL/pgSQL function insertpermission(uuid,uuid,uuid,uuid,integer) line 3 at
SQL statement; nested exception is org.postgresql.util.PSQLException:
ERROR: duplicate key value violates unique constraint
"idx_combined_ad_role_object"
Detail: Key (ad_element_id, role_id,
object_id)=(9d37881c-1991-11e9-b002-00163e7cb696,
def0000a-0000-0000-0000-def00000000b, e4f92226-0f56-4822-a622-d1ebff41df9f)
already exists.
Where: SQL statement "INSERT INTO permissions (
ad_element_id,
id,
role_id,
object_id,
object_type_id
)
VALUES (
v_ad_element_id,
v_id,
v_role_id,
v_object_id,
v_object_type_id
)"
PL/pgSQL function insertpermission(uuid,uuid,uuid,uuid,integer) line 3 at
SQL statement
2019-01-18 11:24:46,691+01 ERROR
[org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-26)
[3b08d759-27d3-42f3-9eee-95c135e88a7b] Exception:
org.springframework.dao.DuplicateKeyException: CallableStatementCallback;
SQL [{call insertpermission(?, ?, ?, ?, ?)}ERROR: duplicate key value
violates unique constraint "idx_combined_ad_role_object"
Detail: Key (ad_element_id, role_id,
object_id)=(9d37881c-1991-11e9-b002-00163e7cb696,
def0000a-0000-0000-0000-def00000000b, e4f92226-0f56-4822-a622-d1ebff41df9f)
already exists.
Where: SQL statement "INSERT INTO permissions (
ad_element_id,
id,
role_id,
object_id,
object_type_id
)
VALUES (
v_ad_element_id,
v_id,
v_role_id,
v_object_id,
v_object_type_id
)"
PL/pgSQL function insertpermission(uuid,uuid,uuid,uuid,integer) line 3 at
SQL statement; nested exception is org.postgresql.util.PSQLException:
ERROR: duplicate key value violates unique constraint
"idx_combined_ad_role_object"
Detail: Key (ad_element_id, role_id,
object_id)=(9d37881c-1991-11e9-b002-00163e7cb696,
def0000a-0000-0000-0000-def00000000b, e4f92226-0f56-4822-a622-d1ebff41df9f)
already exists.
Where: SQL statement "INSERT INTO permissions (
ad_element_id,
id,
role_id,
object_id,
object_type_id
)
VALUES (
v_ad_element_id,
v_id,
v_role_id,
v_object_id,
v_object_type_id
)"
PL/pgSQL function insertpermission(uuid,uuid,uuid,uuid,integer) line 3 at
SQL statement
at
org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:243)
[spring-jdbc.jar:5.0.4.RELEASE]
at
org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72)
[spring-jdbc.jar:5.0.4.RELEASE]
at
org.springframework.jdbc.core.JdbcTemplate.translateException(JdbcTemplate.java:1402)
[spring-jdbc.jar:5.0.4.RELEASE]
at
org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:1065)
[spring-jdbc.jar:5.0.4.RELEASE]
at
org.springframework.jdbc.core.JdbcTemplate.call(JdbcTemplate.java:1104)
[spring-jdbc.jar:5.0.4.RELEASE]
at
org.springframework.jdbc.core.simple.AbstractJdbcCall.executeCallInternal(AbstractJdbcCall.java:414)
[spring-jdbc.jar:5.0.4.RELEASE]
at
org.springframework.jdbc.core.simple.AbstractJdbcCall.doExecute(AbstractJdbcCall.java:374)
[spring-jdbc.jar:5.0.4.RELEASE]
at
org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:198)
[spring-jdbc.jar:5.0.4.RELEASE]
at
org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:135)
[dal.jar:]
at
org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:130)
[dal.jar:]
at
org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeModification(SimpleJdbcCallsHandler.java:76)
[dal.jar:]
at
org.ovirt.engine.core.dao.PermissionDaoImpl.save(PermissionDaoImpl.java:256)
[dal.jar:]
at
org.ovirt.engine.core.dao.PermissionDaoImpl.save(PermissionDaoImpl.java:22)
[dal.jar:]
at
org.ovirt.engine.core.bll.MultiLevelAdministrationHandler.addPermission(MultiLevelAdministrationHandler.java:67)
[bll.jar:]
at
org.ovirt.engine.core.bll.storage.disk.AddDiskCommand.addDiskPermissions(AddDiskCommand.java:628)
[bll.jar:]
at
org.ovirt.engine.core.bll.storage.disk.AddDiskCommand.createDiskBasedOnImage(AddDiskCommand.java:558)
[bll.jar:]
at
org.ovirt.engine.core.bll.storage.disk.AddDiskCommand.executeVmCommand(AddDiskCommand.java:429)
[bll.jar:]
at
org.ovirt.engine.core.bll.VmCommand.executeCommand(VmCommand.java:158)
[bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1147)
[bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1305)
[bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1954)
[bll.jar:]
at
org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164)
[utils.jar:]
at
org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:103)
[utils.jar:]
at
org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1365)
[bll.jar:]
at
org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:413)
[bll.jar:]
at
org.ovirt.engine.core.bll.executor.DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13)
[bll.jar:]
at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:450)
[bll.jar:]
at
org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:432) [bll.jar:]
at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:387)
[bll.jar:]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[rt.jar:1.8.0_191]
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[rt.jar:1.8.0_191]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.8.0_191]
at java.lang.reflect.Method.invoke(Method.java:498)
[rt.jar:1.8.0_191]
at
org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
at
org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at
org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
at
org.jboss.as.weld.ejb.DelegatingInterceptorInvocationContext.proceed(DelegatingInterceptorInvocationContext.java:92)
[wildfly-weld-ejb-14.0.1.Final.jar:14.0.1.Final]
at
org.jboss.weld.interceptor.proxy.WeldInvocationContextImpl.interceptorChainCompleted(WeldInvocationContextImpl.java:107)
[weld-core-impl-3.0.5.Final.jar:3.0.5.Final]
. . .
at
org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1378)
at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_191]
Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value
violates unique constraint "idx_combined_a
d_role_object"
Detail: Key (ad_element_id, role_id,
object_id)=(9d37881c-1991-11e9-b002-00163e7cb696, def0000a-0000-0000-0000-de
f00000000b, e4f92226-0f56-4822-a622-d1ebff41df9f) already exists.
6 years, 4 months