encrypted GENEVE traffic
by Pavel Nakonechnyi
Dear oVirt Community,
From my understanding, oVirt does not support Open vSwitch IPsec tunneling for GENEVE traffic (described at http://docs.openvswitch.org/en/latest/howto/ipsec/ and http://docs.openvswitch.org/en/latest/tutorials/ipsec/).
Are there plans to introduce such support? (or explicitly not to..)
Is it possible to somehow manually configure such tunneling for existing virtual networks? (even in a limited way)
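For context, the kind of manual setup I have in mind is roughly what the linked OVS docs show (just a sketch, not oVirt-specific; the bridge/port names and the pre-shared key are placeholders, and it assumes the openvswitch-ipsec service with Libreswan/strongSwan running on both hosts):
# on each host, pointing remote_ip at the other host:
ovs-vsctl add-br br-ipsec
ovs-vsctl add-port br-ipsec tun0 -- set interface tun0 type=geneve options:remote_ip=<peer_ip> options:psk=swordfish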
Alternatively, is it possible to deploy oVirt on top of already tunneled interfaces (i.e. VXLAN over IPsec)? This would allow encrypting all management traffic.
Such a requirement arises when deploying oVirt on third-party premises with an untrusted network.
Thanks in advance for any clarifications. :)
--
WBR, Pavel
+32478910884
Re: Constantly XFS in memory corruption inside VMs
by Strahil Nikolov
Damn...
You are using EFI boot. Does this happen only to EFI machines?
Did you notice if only EL 8 is affected?
Best Regards,
Strahil Nikolov
On Sunday, 29 November 2020, 19:36:09 GMT+2, Vinícius Ferrão <ferrao(a)versatushpc.com.br> wrote:
Yes!
I have a live VM right now that will be dead on a reboot:
[root@kontainerscomk ~]# cat /etc/*release
NAME="Red Hat Enterprise Linux"
VERSION="8.3 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.3"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.3 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8.3:GA"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.3"
Red Hat Enterprise Linux release 8.3 (Ootpa)
Red Hat Enterprise Linux release 8.3 (Ootpa)
[root@kontainerscomk ~]# sysctl -a | grep dirty
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 30
vm.dirty_writeback_centisecs = 500
vm.dirtytime_expire_seconds = 43200
[root@kontainerscomk ~]# xfs_db -r /dev/dm-0
xfs_db: /dev/dm-0 is not a valid XFS filesystem (unexpected SB magic number 0xa82a0000)
Use -F to force a read attempt.
[root@kontainerscomk ~]# xfs_db -r /dev/dm-0 -F
xfs_db: /dev/dm-0 is not a valid XFS filesystem (unexpected SB magic number 0xa82a0000)
xfs_db: size check failed
xfs_db: V1 inodes unsupported. Please try an older xfsprogs.
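(Side note in case someone hits the same state: the usual first step before anything destructive is a read-only check from a rescue environment, with the filesystem unmounted, e.g.:)
xfs_repair -n /dev/dm-0    # no-modify mode, only reports what it would fix
xfs_repair /dev/dm-0       # actual repair, only after backing up or snapshotting the disk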
[root@kontainerscomk ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Nov 19 22:40:39 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=ad84d1ea-c9cc-4b22-8338-d1a6b2c7d27e /boot xfs defaults 0 0
UUID=4642-2FF6 /boot/efi vfat umask=0077,shortname=winnt 0 2
/dev/mapper/rhel-swap none swap defaults 0 0
Thanks,
-----Original Message-----
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: Sunday, November 29, 2020 2:33 PM
To: Vinícius Ferrão <ferrao(a)versatushpc.com.br>
Cc: users <users(a)ovirt.org>
Subject: Re: [ovirt-users] Re: Constantly XFS in memory corruption inside VMs
Can you check the output on the VM that was affected:
# cat /etc/*release
# sysctl -a | grep dirty
Best Regards,
Strahil Nikolov
On Sunday, 29 November 2020, 19:07:48 GMT+2, Vinícius Ferrão via Users <users(a)ovirt.org> wrote:
Hi Strahil.
I’m not using barrier options on mount. It’s the default settings from CentOS install.
I have some additional findings: there's a big number of discarded packets on the switch ports facing the hypervisor interfaces.
Discards should be OK as far as I know; I expect TCP to handle this and do the proper retransmissions, but I'm asking whether this may be related or not. Our storage is over NFS. My general expertise is with iSCSI, and I've never seen this kind of issue with iSCSI, not that I'm aware of.
In other clusters I've seen a high number of discards with iSCSI on XenServer 7.2, but there's no corruption on the VMs there...
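(For reference, the host-side counters I'm looking at are along these lines; the interface name is just an example:)
ip -s link show eno1                            # RX/TX errors and dropped counters
ethtool -S eno1 | grep -iE 'drop|discard|err'   # NIC-level statistics
nfsstat -c                                      # NFS client calls vs. retransmissions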
Thanks,
Sent from my iPhone
> On 29 Nov 2020, at 04:00, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>
> Are you using "nobarrier" mount options in the VM ?
>
> If yes, can you try to remove the "nobarrrier" option.
>
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Saturday, 28 November 2020, 19:25:48 GMT+2, Vinícius Ferrão <ferrao(a)versatushpc.com.br> wrote:
>
>
>
>
>
> Hi Strahil,
>
> I moved a running VM to another host, rebooted it, and no corruption was found. If there's any corruption it may be silent corruption... I've had cases where the VM was new, just installed; I ran dnf -y update to get the updated packages, rebooted, and boom, XFS corruption. So perhaps the migration process isn't the one to blame.
>
> But, in fact, I remember one VM that went down while being moved and was corrupted when I rebooted it. That may not be related, though; it was perhaps already in an inconsistent state.
>
> Anyway, here's the mount options:
>
> Host1:
> 192.168.10.14:/mnt/pool0/ovirt/vm on /rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm type nfs4 (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=192.168.10.1,local_lock=none,addr=192.168.10.14)
>
> Host2:
> 192.168.10.14:/mnt/pool0/ovirt/vm on /rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm type nfs4 (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,nosharecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=192.168.10.1,local_lock=none,addr=192.168.10.14)
>
> The options are the default ones. I haven't changed anything when configuring this cluster.
>
> Thanks.
>
>
>
> -----Original Message-----
> From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
> Sent: Saturday, November 28, 2020 1:54 PM
> To: users <users(a)ovirt.org>; Vinícius Ferrão <ferrao(a)versatushpc.com.br>
> Subject: Re: [ovirt-users] Constantly XFS in memory corruption inside VMs
>
> Can you check with a test VM whether this happens after a virtual machine migration?
>
> What are your mount options for the storage domain ?
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Saturday, 28 November 2020, 18:25:15 GMT+2, Vinícius Ferrão via Users <users(a)ovirt.org> wrote:
>
>
>
>
>
>
>
>
> Hello,
>
>
>
> I’m trying to discover why an oVirt 4.4.3 Cluster with two hosts and NFS shared storage on TrueNAS 12.0 is constantly getting XFS corruption inside the VMs.
>
>
>
> For no apparent reason VMs get corrupted, sometimes halting, or just silently corrupted so that after a reboot the system is unable to boot due to "corruption of in-memory data detected". Sometimes the corrupted data is "all zeroes", sometimes there's data there. In extreme cases XFS superblock 0 gets corrupted and the system cannot even detect an XFS partition anymore, since the XFS magic number is corrupted in the first blocks of the virtual disk.
>
>
>
> This has been happening for a month now. We had to roll back some backups, and I don't trust the state of the VMs anymore.
>
>
>
> Using xfs_db I can see that some VMs have corrupted superblocks while the VM is up. One in particular had sb0 corrupted, so I knew the machine would be gone when a reboot kicked in, and that's exactly what happened.
>
>
>
> The other day I was installing a new CentOS 8 VM for no particular reason, and after running dnf -y update and a reboot the VM was corrupted and needed an XFS repair. That was an extreme case.
>
>
>
> So, I've looked at the TrueNAS logs, and there's apparently nothing wrong with the system. No errors logged in dmesg, nothing in /var/log/messages, and no errors on the "zpools", not even after scrub operations. We've been monitoring the switch, a Catalyst 2960X, and all of its interfaces. There are no "up and down" events and zero errors on any interface (we have a 4-port LACP on the TrueNAS side and a 2-port LACP on each host); everything seems to be fine. The only metric that I was unable to get is "dropped packets", but I don't know if this can be an issue or not.
>
>
>
> Finally, on oVirt, I can't find anything either. I looked at /var/log/messages and /var/log/sanlock.log but found nothing suspicious there.
>
>
>
> Is anyone out there experiencing this? Our VMs are mainly CentOS 7/8 with XFS; there are 3 Windows VMs that do not seem to be affected, but everything else is affected.
>
>
>
> Thanks all.
>
>
>
Unable to live migrate a VM from 4.4.2 to 4.4.3 CentOS Linux host
by Gianluca Cecchi
Hello,
I was able to update an external CentOS Linux 8.2 standalone engine from 4.4.2 to 4.4.3 (see the dedicated thread).
Then I was able to put one 4.4.2 host (CentOS Linux 8.2 based, not oVirt Node NG) into maintenance and run:
[root@ov301 ~]# dnf update
Last metadata expiration check: 0:27:11 ago on Wed 11 Nov 2020 08:48:04 PM
CET.
Dependencies resolved.
======================================================================================================================
 Package                      Arch     Version                                Repository                          Size
======================================================================================================================
Installing:
 kernel                       x86_64   4.18.0-193.28.1.el8_2                  BaseOS                             2.8 M
 kernel-core                  x86_64   4.18.0-193.28.1.el8_2                  BaseOS                              28 M
 kernel-modules               x86_64   4.18.0-193.28.1.el8_2                  BaseOS                              23 M
 ovirt-ansible-collection     noarch   1.2.1-1.el8                            ovirt-4.4                          276 k
     replacing ovirt-ansible-engine-setup.noarch 1.2.4-1.el8
     replacing ovirt-ansible-hosted-engine-setup.noarch 1.1.8-1.el8
Upgrading:
 ansible                      noarch   2.9.15-2.el8                           ovirt-4.4-centos-ovirt44            17 M
 bpftool                      x86_64   4.18.0-193.28.1.el8_2                  BaseOS                             3.4 M
 cockpit-ovirt-dashboard      noarch   0.14.13-1.el8                          ovirt-4.4                          3.5 M
 ioprocess                    x86_64   1.4.2-1.el8                            ovirt-4.4                           37 k
 kernel-tools                 x86_64   4.18.0-193.28.1.el8_2                  BaseOS                             3.0 M
 kernel-tools-libs            x86_64   4.18.0-193.28.1.el8_2                  BaseOS                             2.8 M
 libiscsi                     x86_64   1.18.0-8.module_el8.2.0+524+f765f7e0   AppStream                           89 k
 nftables                     x86_64   1:0.9.3-12.el8_2.1                     BaseOS                             311 k
 ovirt-hosted-engine-ha       noarch   2.4.5-1.el8                            ovirt-4.4                          325 k
 ovirt-hosted-engine-setup    noarch   2.4.8-1.el8                            ovirt-4.4                          227 k
 ovirt-imageio-client         x86_64   2.1.1-1.el8                            ovirt-4.4                           21 k
 ovirt-imageio-common         x86_64   2.1.1-1.el8                            ovirt-4.4                          155 k
 ovirt-imageio-daemon         x86_64   2.1.1-1.el8                            ovirt-4.4                           15 k
 ovirt-provider-ovn-driver    noarch   1.2.32-1.el8                           ovirt-4.4                           27 k
 ovirt-release44              noarch   4.4.3-1.el8                            ovirt-4.4                           17 k
 python3-ioprocess            x86_64   1.4.2-1.el8                            ovirt-4.4                           33 k
 python3-nftables             x86_64   1:0.9.3-12.el8_2.1                     BaseOS                              25 k
 python3-ovirt-engine-sdk4    x86_64   4.4.6-1.el8                            ovirt-4.4                          560 k
 python3-perf                 x86_64   4.18.0-193.28.1.el8_2                  BaseOS                             2.9 M
 python3-pyasn1               noarch   0.4.6-3.el8                            ovirt-4.4-centos-opstools          140 k
 python3-pyasn1-modules       noarch   0.4.6-3.el8                            ovirt-4.4-centos-opstools          151 k
 qemu-img                     x86_64   15:4.2.0-29.el8.6                      ovirt-4.4-advanced-virtualization  1.0 M
 qemu-kvm                     x86_64   15:4.2.0-29.el8.6                      ovirt-4.4-advanced-virtualization  118 k
 qemu-kvm-block-curl          x86_64   15:4.2.0-29.el8.6                      ovirt-4.4-advanced-virtualization  129 k
 qemu-kvm-block-gluster       x86_64   15:4.2.0-29.el8.6                      ovirt-4.4-advanced-virtualization  131 k
 qemu-kvm-block-iscsi         x86_64   15:4.2.0-29.el8.6                      ovirt-4.4-advanced-virtualization  136 k
 qemu-kvm-block-rbd           x86_64   15:4.2.0-29.el8.6                      ovirt-4.4-advanced-virtualization  130 k
 qemu-kvm-block-ssh           x86_64   15:4.2.0-29.el8.6                      ovirt-4.4-advanced-virtualization  131 k
 qemu-kvm-common              x86_64   15:4.2.0-29.el8.6                      ovirt-4.4-advanced-virtualization  1.2 M
 qemu-kvm-core                x86_64   15:4.2.0-29.el8.6                      ovirt-4.4-advanced-virtualization  3.4 M
 selinux-policy               noarch   3.14.3-41.el8_2.8                      BaseOS                             615 k
 selinux-policy-targeted      noarch   3.14.3-41.el8_2.8                      BaseOS                              15 M
 spice-server                 x86_64   0.14.2-1.el8_2.1                       AppStream                          404 k
 tzdata                       noarch   2020d-1.el8                            BaseOS                             471 k
 vdsm                         x86_64   4.40.35.1-1.el8                        ovirt-4.4                          1.4 M
 vdsm-api                     noarch   4.40.35.1-1.el8                        ovirt-4.4                          106 k
 vdsm-client                  noarch   4.40.35.1-1.el8                        ovirt-4.4                           24 k
 vdsm-common                  noarch   4.40.35.1-1.el8                        ovirt-4.4                          136 k
 vdsm-hook-ethtool-options    noarch   4.40.35.1-1.el8                        ovirt-4.4                          9.8 k
 vdsm-hook-fcoe               noarch   4.40.35.1-1.el8                        ovirt-4.4                           10 k
 vdsm-hook-openstacknet       noarch   4.40.35.1-1.el8                        ovirt-4.4                           18 k
 vdsm-hook-vhostmd            noarch   4.40.35.1-1.el8                        ovirt-4.4                           17 k
 vdsm-hook-vmfex-dev          noarch   4.40.35.1-1.el8                        ovirt-4.4                           11 k
 vdsm-http                    noarch   4.40.35.1-1.el8                        ovirt-4.4                           15 k
 vdsm-jsonrpc                 noarch   4.40.35.1-1.el8                        ovirt-4.4                           31 k
 vdsm-network                 x86_64   4.40.35.1-1.el8                        ovirt-4.4                          331 k
 vdsm-python                  noarch   4.40.35.1-1.el8                        ovirt-4.4                          1.3 M
 vdsm-yajsonrpc               noarch   4.40.35.1-1.el8                        ovirt-4.4                           40 k
Installing dependencies:
 NetworkManager-ovs           x86_64   1:1.22.14-1.el8                        ovirt-4.4-copr:copr.fedorainfracloud.org:networkmanager:NetworkManager-1.22  144 k
Transaction Summary
======================================================================================================================
Install 5 Packages
Upgrade 48 Packages
Total download size: 116 M
After the reboot I can activate the host (strangely, I see many pop-up messages about "finished activating host") and the host is shown as:
OS Version: RHEL - 8.2 - 2.2004.0.2.el8
OS Description: CentOS Linux 8 (Core)
Kernel Version: 4.18.0 - 193.28.1.el8_2.x86_64
KVM Version: 4.2.0 - 29.el8.6
LIBVIRT Version: libvirt-6.0.0-25.2.el8
VDSM Version: vdsm-4.40.35.1-1.el8
SPICE Version: 0.14.2 - 1.el8_2.1
GlusterFS Version: [N/A]
CEPH Version: librbd1-12.2.7-9.el8
Open vSwitch Version: [N/A]
Nmstate Version: nmstate-0.2.10-1.el8
Kernel Features: MDS: (Vulnerable: Clear CPU buffers attempted, no
microcode; SMT vulnerable), L1TF: (Mitigation: PTE Inversion; VMX:
conditional cache flushes, SMT vulnerable), SRBDS: (Not affected),
MELTDOWN: (Mitigation: PTI), SPECTRE_V1: (Mitigation: usercopy/swapgs
barriers and __user pointer sanitization), SPECTRE_V2: (Mitigation: Full
generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB
filling), ITLB_MULTIHIT: (KVM: Mitigation: Split huge pages),
TSX_ASYNC_ABORT: (Not affected), SPEC_STORE_BYPASS: (Mitigation:
Speculative Store Bypass disabled via prctl and seccomp)
VNC Encryption: Disabled
FIPS mode enabled: Disabled
while another host is still on 4.4.2:
OS Version: RHEL - 8.2 - 2.2004.0.2.el8
OS Description: CentOS Linux 8 (Core)
Kernel Version: 4.18.0 - 193.19.1.el8_2.x86_64
KVM Version: 4.2.0 - 29.el8.3
LIBVIRT Version: libvirt-6.0.0-25.2.el8
VDSM Version: vdsm-4.40.26.3-1.el8
SPICE Version: 0.14.2 - 1.el8
GlusterFS Version: [N/A]
CEPH Version: librbd1-12.2.7-9.el8
Open vSwitch Version: [N/A]
Nmstate Version: nmstate-0.2.10-1.el8
Kernel Features: MDS: (Vulnerable: Clear CPU buffers attempted, no
microcode; SMT vulnerable), L1TF: (Mitigation: PTE Inversion; VMX:
conditional cache flushes, SMT vulnerable), SRBDS: (Not affected),
MELTDOWN: (Mitigation: PTI), SPECTRE_V1: (Mitigation: usercopy/swapgs
barriers and __user pointer sanitization), SPECTRE_V2: (Mitigation: Full
generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB
filling), ITLB_MULTIHIT: (KVM: Mitigation: Split huge pages),
TSX_ASYNC_ABORT: (Not affected), SPEC_STORE_BYPASS: (Mitigation:
Speculative Store Bypass disabled via prctl and seccomp)
VNC Encryption: Disabled
FIPS mode enabled: Disabled
But if I try to move VMs away from the 4.4.2 host to the 4.4.3 one, I get an error:
Failed to migrate VM c8client to Host ov301. Trying to migrate to another Host.
(btw: there is no other active host; there is an ov300 host that is in maintenance)
No available host was found to migrate VM c8client to.
It seems the root error in engine.log is:
2020-11-11 21:44:42,487+01 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-11) [] Migration of VM 'c8client' to host 'ov301'
failed: VM destroyed during the startup.
On the target host, in /var/log/libvirt/qemu/c8client.log, I see:
2020-11-11 20:44:40.981+0000: shutting down, reason=failed
In target vdsm.log
2020-11-11 21:44:39,958+0100 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
call VM.migrationCreate took more than 1.00
seconds to succeed: 1.97 (__init__:316)
2020-11-11 21:44:40,230+0100 INFO (periodic/3) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=cb51fd4a-09d3-4d77-821b-391da2467487 (api:48)
2020-11-11 21:44:40,231+0100 INFO (periodic/3) [vdsm.api] FINISH repoStats
return={'fa33df49-b09d-4f86-9719-ede649542c21': {'code': 0, 'lastCheck':
'4.1', 'delay': '0.000836715', 'valid': True, 'version': 4, 'acquired':
True, 'actual': True}} from=internal,
task_id=cb51fd4a-09d3-4d77-821b-391da2467487 (api:54)
2020-11-11 21:44:41,929+0100 INFO (jsonrpc/5) [api.virt] START
destroy(gracefulAttempts=1) from=::ffff:10.4.192.32,52266,
vmId=c95da734-7ed1-4caa-bacb-3fa24f4efb56 (api:48)
2020-11-11 21:44:41,930+0100 INFO (jsonrpc/5) [virt.vm]
(vmId='c95da734-7ed1-4caa-bacb-3fa24f4efb56') Release VM resources (vm:4666)
2020-11-11 21:44:41,930+0100 INFO (jsonrpc/5) [virt.vm]
(vmId='c95da734-7ed1-4caa-bacb-3fa24f4efb56') Stopping connection
(guestagent:444)
2020-11-11 21:44:41,930+0100 INFO (jsonrpc/5) [vdsm.api] START
teardownImage(sdUUID='fa33df49-b09d-4f86-9719-ede649542c21',
spUUID='ef17cad6-7724-4cd8-96e3-9af6e529db51',
imgUUID='ff10a405-cc61-4d00-a83f-3ee04b19f381', volUUID=None)
from=::ffff:10.4.192.32,52266, task_id=177461c0-83d6-4c90-9c5c-3cc8ee9150c7
(api:48)
It seems that the OVN configuration was not preserved during the host update.
Right now all my active VMs have at least one vNIC on OVN, so I cannot test the scenario of migrating a VM without an OVN-based vNIC.
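(For reference, this is how I'm checking on the upgraded host whether ovn-controller is still pointed at the engine; as far as I understand these are the standard Open_vSwitch external_ids keys used by the oVirt OVN provider:)
systemctl status openvswitch ovn-controller
ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-type
ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip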
In fact, on the engine I see only the currently active 4.4.2 host (ov200) and another host that is in maintenance (it is still on 4.3.10; I wanted to update it to 4.4.2 but then realized that 4.4.3 was already out...):
[root@ovmgr1 ovirt-engine]# ovn-sbctl show
Chassis "6a46b802-5a50-4df5-b1af-e73f58a57164"
hostname: "ov200.mydomain"
Encap geneve
ip: "10.4.192.32"
options: {csum="true"}
Port_Binding "2ae7391b-4297-4247-a315-99312f6392e6"
Port_Binding "c1ec60a4-b4f3-4cb5-8985-43c086156e83"
Port_Binding "174b69f8-00ed-4e25-96fc-7db11ea8a8b9"
Port_Binding "66359e79-56c4-47e0-8196-2241706329f6"
Port_Binding "ccbd6188-78eb-437b-9df9-9929e272974b"
Chassis "ddecf0da-4708-4f93-958b-6af365a5eeca"
hostname: "ov300.mydomain"
Encap geneve
ip: "10.4.192.33"
options: {csum="true"}
[root@ovmgr1 ovirt-engine]#
Any hint about why the OVN config was lost on ov301, and about the correct procedure to get it back and make it persist across future updates?
NOTE: this cluster was on 4.3.10; when I updated it to 4.4.2 I noticed that the OVN config was not retained and I had to run on the hosts:
[root@ov200 ~]# vdsm-tool ovn-config engine_ip ov200_ip_on_mgmt
Using default PKI files
Created symlink
/etc/systemd/system/multi-user.target.wants/openvswitch.service →
/usr/lib/systemd/system/openvswitch.service.
Created symlink
/etc/systemd/system/multi-user.target.wants/ovn-controller.service →
/usr/lib/systemd/system/ovn-controller.service.
[root@ov200 ~]#
Now it seems the problem persists...
Why do I have to run it each time?
Gianluca
oVirt 4.4: Self-hosted engine deployment fails with backup restore from 4.3 engine
by Oliver Leinfelder
Hi there,
I'm a bit puzzled about the possible upgrade paths from a 4.3 cluster to version 4.4 in a self-hosted engine environment.
My idea was:
Set up a new host with a clean oVirt Node 4.4 installation, then deploy the hosted engine on it with a restored backup from the production cluster, and go from there.
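Concretely, the flow I'm attempting looks roughly like this (file names are only examples):
# on the old 4.3 engine:
engine-backup --mode=backup --scope=all --file=engine43.bck --log=engine43-backup.log
# on the freshly installed 4.4 node:
hosted-engine --deploy --restore-from-file=engine43.bck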
This however fails with the following error:
2020-05-27 00:17:08,886+0200 DEBUG
otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils._process_output:103 {'msg': 'non-zero return code', 'cmd':
['engine-setup', '--accept-defaults',
'--config-append=/root/ovirt-engine-answers'], 'stdout': "[ INFO ] Stage:
Initializing\n[ INFO ] Stage: Environment setup\n
Configuration files: /etc/ovirt-engine-setup.conf.d/10-packaging-jboss.conf,
/etc/ovirt-engine-setup.conf.d/10-packaging.conf,
/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf,
/root/ovirt-engine-answers\n Log file:
/var/log/ovirt-engine/setup/ovirt-engine-setup-20200527001657-fyeueu.log\n
Version: otopi-1.9.1 (otopi-1.9.1-1.el8)\n[ INFO ] DNF Downloading 1 files, 0.00KB\n[
INFO ] DNF Downloaded CentOS-8 - AppStream\n[ INFO ] DNF Downloading 1
files, 0.00KB\n[ INFO ] DNF Downloaded CentOS-8 - Base\n[ INFO ] DNF
Downloading 1 files, 0.00KB\n
[...]
... answers from the backup config follow ...
[...]
2020-05-27 00:17:12,396+0200 DEBUG otopi.context context._executeMethod:145
method exception
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in
_executeMethod
method['method']()
File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py",
line 403, in _closeup
r = ah.run()
File
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/ansible_utils.py",
line 229, in run
raise RuntimeError(_('Failed executing ansible-playbook'))
Is this approach (restoring from 4.3) generally supposed to work? If not,
what is the appropriate upgrade path?
Thank you!
Regards
Oli
ovirt 4.3 - locked image vm - unable to remove a failed deploy of a guest dom
by 3c.monitor@gruppofilippetti.it
Hi all.
I've deployed a VM from a corrupted template (its disk is missing, but I only checked that later...).
My software version is 4.3.
Now I have an unmanaged VM in the inventory and I'm unable to remove it.
Its status is "locked image".
I've restarted ovirt-engine many times on the self-hosted engine, and the hosts too, but with no benefit.
So, what now?
I've also consulted: https://access.redhat.com/solutions/396753
but still with no results.
No tasks or items appear to be "locked"...
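(The kind of check that article points at is, as far as I understand, the engine's unlock helper; the options below are from memory, so treat this only as a rough sketch:)
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -q        # list locked VMs/templates/disks
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t vm <vm_id>    # force-unlock a specific VM (use with care)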
Any other ideas?
Thanks a lot.
How to install win-virtio Drivers (aka guest tools) silently
by t.hofmann@klett-it.net
Hi,
I am currently trying to automate the upgrade from the "old" oVirt 4.3 Guest Tools to the new ones that should be used starting with oVirt 4.4.
To achieve this I have to install the win-virtio tools quietly. This works if I use the parameters /install /passive /norestart, but then the oVirt Guest Agent is not installed. Is there a parameter or another way to install it during a silent installation?
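What I'm experimenting with so far looks roughly like this (the ISO layout and file names are from memory and may differ between virtio-win versions; as far as I understand, the QEMU guest agent takes over the old agent's role on 4.4):
rem silent install of the bundled drivers/tools
virtio-win-guest-tools.exe /install /passive /norestart
rem the QEMU guest agent MSI ships separately on the virtio-win ISO
msiexec /i D:\guest-agent\qemu-ga-x86_64.msi /qn /norestart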
Thanks in advance.
- Tim
Cannot SSH into the self hosted host (going through tutorial)
by jenia.ivlev@gmail.com
Hello.
I want to install oVirt on my PC, and for this purpose I'm following the tutorial, which instructed me to create a VM and install the oVirt self-hosted engine host in it: https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_eng...
I cannot SSH to this new oVirt install - I get the error "Connection refused" (even though I see something listening on port 22 when running `netstat -an`). So I tried stopping `sshd` and I got the same error. I also tried with `telnet 192.168.122.1 22` and get the same "Connection refused" error.
Also, Yum returns "could not resolve host: https://mirrors.fedoraproject.org"
I can ping the machine though from my PC.
I turned off `firewalld`. `iptables -S` returns `ACCEPT` for everything.
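(For reference, a few more checks that might narrow this down:)
ss -tlnp | grep ':22'            # which address sshd is actually listening on
systemctl status sshd
journalctl -u sshd --since today
ssh -v root@192.168.122.1        # verbose client output from the PC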
My PC is running Arch Linux and I installed the new VM using "Virtual Machine Manager". Normally I can easily SSH to any VM I create. Maybe someone knows what I'm doing wrong?
Thanks kindly
Jenia
P.S.
Running `nodectl check` returns "ok" for all.
Unable to move or copy disks
by suporte@logicworks.pt
Hi,
I was trying to move a disk between gluster storage domains, without success.
# gluster --version
glusterfs 6.10
Now I have a VM with this message:
The VM has snapshot(s) with disk(s) in illegal status. Please don't shutdown the VM before successfully retrying the snapshot delete
The oVirt version is
Version 4.3.10.4-1.el7
I cannot delete the snapshot. What should I do?
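(So far the only leads I have are to watch the engine log while retrying the snapshot delete, and to query for locked entities on the engine; roughly, and with the options from memory:)
tail -f /var/log/ovirt-engine/engine.log                              # on the engine, while retrying the delete
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -q      # list locked disks/snapshots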
Thanks
--
Jose Ferradeira
http://www.logicworks.pt