Re: VM Networking
by Dominik Holler
On Tue, Jun 2, 2020 at 3:34 PM Anton Louw via Users <users(a)ovirt.org> wrote:
>
>
> Hi Everybody,
>
>
>
> I had a strange incident today. One of my nodes restarted (still busy
> investigating why), but my VMs powered down and moved to different nodes.
> The only issue is, it seems to have changed the VLAN ID on the NICs?
>
Which VLANs changed during the reboot of the host?
Just the VLANs on the host's NICs, such that the VMs are still connected to
the correct VLANs?
> Has anybody seen this before?
>
It is possible to run a network configuration that is not persisted and is
therefore reverted during a reboot in this way.
Please find more details in
http://ovirt.github.io/ovirt-engine-api-model/master/#services/host/metho...
or in the "Save network configuration" checkbox in the "Setup Host Networks" dialog.
In recent oVirt versions this checkbox cannot be unselected anymore, to
avoid unexpected situations.
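The commit can also be triggered from a script if that is more convenient; a minimal sketch with the oVirt Python SDK (engine URL, credentials and host name below are placeholders, not taken from your setup):

# Minimal sketch: persist ("commit") the current network configuration of a
# host via the oVirt Python SDK, i.e. what the "Save network configuration"
# checkbox / the commitnetconfig REST action does. Placeholders throughout.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine URL
    username='admin@internal',
    password='secret',
    insecure=True,  # or ca_file='ca.pem' in a real setup
)

hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=myhost')[0]  # placeholder host name
host_service = hosts_service.host_service(host.id)

# Persist the currently running network configuration on the host.
host_service.commit_net_config()

connection.close()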
> I am just trying to establish if this was changed on the node networking
> side, or inside of the VM.
>
>
>
If a virtual NIC is connected to a logical network with a VLAN tag, there
is no intended way for the VM to modify this tag.
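To see on which side the VLAN is defined, the tags can be read from the engine; a rough sketch with the Python SDK (engine URL, credentials and VM name are placeholders):

# Sketch: print the VLAN tag of each logical network, and which vNIC profile /
# network each NIC of a VM is plugged into. All names below are placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)
system = connection.system_service()

# VLAN tags are a property of the logical networks, not of the VMs.
for net in system.networks_service().list():
    print(net.name, 'vlan:', net.vlan.id if net.vlan else None)

# Which profile and network each vNIC of a given VM uses.
vm = system.vms_service().list(search='name=myvm')[0]  # placeholder VM name
for nic in system.vms_service().vm_service(vm.id).nics_service().list():
    profile = connection.follow_link(nic.vnic_profile)
    network = connection.follow_link(profile.network)
    print(nic.name, '->', profile.name, '/', network.name)

connection.close()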
> Thank you
>
> *Anton Louw*
> *Cloud Engineer: Storage and Virtualization* at *Vox*
> ------------------------------
> *T:* 087 805 0000 | *D:* 087 805 1572
> *M:* N/A
> *E:* anton.louw(a)voxtelecom.co.za
> *A:* Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
> www.vox.co.za
3 years
VM Networking
by Anton Louw
Hi Everybody,
I had a strange incident today. One of my nodes restarted (still busy investigating why), but my VMs powered down and moved to different nodes. The only issue is, it seems to have changed the VLAN ID on the NICs? Has anybody seen this before? I am just trying to establish if this was changed on the node networking side, or inside of the VM.
Thank you
Anton Louw
Cloud Engineer: Storage and Virtualization
______________________________________
D: 087 805 1572 | M: N/A
A: Rutherford Estate, 1 Scott Street, Waverley, Johannesburg
anton.louw(a)voxtelecom.co.za
www.vox.co.za
3 years
Problem with oVirt 4.4
by minnie.du@vinchin.com
Hello,
We have run into a problem when testing oVirt 4.4.
Our VM is on NFS storage. When testing the snapshot function of oVirt 4.4, we created snapshot 1 and then snapshot 2, but after clicking the delete button for snapshot 1, snapshot 1 failed to be deleted and the state of the corresponding disk became illegal. Removing a snapshot in this state requires a lot of risky work in the background, which makes it impossible to free up the snapshot space. Long-term backups will therefore cause the target VM to accumulate a large number of unremovable snapshots, taking up a large amount of production storage. So we need your help.
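For reference, the snapshot and disk states can also be read through the Python SDK; a rough sketch (connection details and VM name are placeholders):

# Sketch: list the snapshots of a VM and the status of its disks, to spot
# disks left in the ILLEGAL state. Connection details and VM name are placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)
system = connection.system_service()
vm = system.vms_service().list(search='name=myvm')[0]
vm_service = system.vms_service().vm_service(vm.id)

for snap in vm_service.snapshots_service().list():
    print('snapshot:', snap.description, snap.snapshot_status)

for att in vm_service.disk_attachments_service().list():
    disk = connection.follow_link(att.disk)
    print('disk:', disk.alias, disk.status)  # e.g. DiskStatus.ILLEGAL

connection.close()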
3 years
Problems creating bricks for new Gluster storage 4.4
by Jaret Garcia
Hi guys, I'm trying to deploy a new oVirt environment based on version 4.4, so I set up the following:
1 engine sever (stand alone) Version : 4.4.0.3
3 host servers (32 GB RAM, 2 CPUs 8-core Xeon 5160, 500 GB OS drive, one 8 TB storage drive and one 256 GB SSD; I'm trying to create a brick with SSD cache, with no success)
Detail of versions running on servers
OS Version: RHEL - 8 - 1.1911.0.9.el8
OS Description: oVirt Node 4.4.0
Kernel Version: 4.18.0 - 147.8.1.el8_1.x86_64
KVM Version: 4.1.0 - 23.el8.1
LIBVIRT Version: libvirt-5.6.0-10.el8
VDSM Version: vdsm-4.40.16-1.el8
SPICE Version: 0.14.2 - 1.el8
GlusterFS Version: glusterfs-7.5-1.el8
CEPH Version: librbd1-12.2.7-9.el8
Open vSwitch Version: openvswitch-2.11.1-5.el8
Nmstate Version: nmstate-0.2.10-1.el8
Engine server working fine, all 3 host servers already part of the default cluster as hypervisors. Then, when I try to create a brick on any of the servers, using the 8 TB HDD with the SSD as cache, the process fails; in the GUI it just says "failed to create brick on host".
However, in the engine log as well as in the brick-setup log I see: "err" : " Physical volume \"/dev/sdc\" still in use\n", "failed" : true (sdc in this case is the 8 TB HDD).
Attached files of both logs.
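In case it helps, a quick read-only check of whether an old LVM signature is still present on the disk can look like this (a sketch; /dev/sdc is assumed, as in the error above):

# Sketch: show what is still sitting on /dev/sdc (partitions, holders, LVM PV /
# VG membership, on-disk signatures). Read-only commands; /dev/sdc is assumed.
import subprocess

def run(cmd):
    print('$', ' '.join(cmd))
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)

run(['lsblk', '/dev/sdc'])                   # partitions and device-mapper holders
run(['pvs', '-o', 'pv_name,vg_name'])        # PVs and the VGs they belong to
run(['blkid', '-p', '/dev/sdc'])             # on-disk signatures (e.g. LVM2_member)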
Thanks in advance,
Jaret Garcia
PACKET
Next Generation IT & Telecom
Tel. +52 (55) 59898707
Of. +52 (55) 47441200 Ext. 1210
email: jaret.garcia(a)packet.mx
www.packet.mx
3 years
Mixing OS versions
by Stack Korora
Greetings,
We've been using Scientific Linux 7 quite successfully with oVirt for
years now. However, since there will not be an SL8, we are transitioning
new servers to CentOS 8. I would like to add a new oVirt hypervisor node.
How bad of an idea is it to have an 8 system when the rest are 7, even
though the version of oVirt will be the same?
Thanks!
3 years
ovirt imageio problem...
by matteo fedeli
Hi! I installed CentOS 8 and the oVirt packages following these steps:
systemctl enable --now cockpit.socket
yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
yum module -y enable javapackages-tools
yum module -y enable pki-deps
yum module -y enable postgresql:12
yum -y install glibc-locale-source glibc-langpack-en
localedef -v -c -i en_US -f UTF-8 en_US.UTF-8
yum update
yum install ovirt-engine
engine-setup (keeping all defaults)
Is it possible that the ovirt-imageio-proxy service is not installed? (service ovirt-imageio-proxy status --> not found; yum install ovirt-imageio-proxy --> not found.) I'm not able to upload an ISO... I also installed the CA cert in Firefox...
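A small sketch of how the installed imageio units can be listed (the unit names below are guesses and may not match 4.4 exactly):

# Sketch: list which ovirt-imageio related systemd units exist and whether they
# are active. Unit names are guesses; adjust as needed.
import subprocess

listing = subprocess.run(['systemctl', 'list-unit-files', 'ovirt-imageio*'],
                         capture_output=True, text=True).stdout
print(listing or 'no ovirt-imageio* units found')

for unit in ('ovirt-imageio.service', 'ovirt-imageio-proxy.service'):
    state = subprocess.run(['systemctl', 'is-active', unit],
                           capture_output=True, text=True).stdout.strip()
    print(unit, '->', state or 'unknown')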
3 years
Mount options
by Tommaso - Shellrent
Hi to all.
Is there a way to change the mount options of a running storage
domain with Gluster, without setting everything to maintenance and shutting
down the VMs on it?
Regards,
--
Shellrent - The first Italian hosting, Security First
*Tommaso De Marchi*
/COO - Chief Operating Officer/
Shellrent Srl
Via dell'Edilizia, 19 - 36100 Vicenza
Tel. 0444321155 | Fax 04441492177
3 years
Issues deploying 4.4 with HE on new EPYC hosts
by Mark R
Hello all,
I have some EPYC servers that are not yet in production, so I wanted to go ahead and move them off of 4.3 (which was working) to 4.4. I flattened and reinstalled the hosts with CentOS 8.1 Minimal and installed all updates. Some very simple networking, just a bond and two iSCSI interfaces. After adding the oVirt 4.4 repo and installing the requirements, I run 'hosted-engine --deploy' and proceed through the setup. Everything looks as though it is going nicely and the local HE starts and runs perfectly. After copying the HE disks out to storage, the system tries to start it there but is using a different CPU definition and it's impossible to start it. At this point I'm stuck but hoping someone knows the fix, because this is as vanilla a deployment as I could attempt and it appears EPYC CPUs are a no-go right now with 4.4.
When the HostedEngineLocal VM is running, the CPU definition is:
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>EPYC-IBPB</model>
<vendor>AMD</vendor>
<feature policy='require' name='x2apic'/>
<feature policy='require' name='tsc-deadline'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='tsc_adjust'/>
<feature policy='require' name='clwb'/>
<feature policy='require' name='umip'/>
<feature policy='require' name='arch-capabilities'/>
<feature policy='require' name='cmp_legacy'/>
<feature policy='require' name='perfctr_core'/>
<feature policy='require' name='wbnoinvd'/>
<feature policy='require' name='amd-ssbd'/>
<feature policy='require' name='skip-l1dfl-vmentry'/>
<feature policy='disable' name='monitor'/>
<feature policy='disable' name='svm'/>
<feature policy='require' name='topoext'/>
</cpu>
Once the HostedEngine VM is defined and trying to start, the CPU definition is simply:
<cpu mode='custom' match='exact' check='partial'>
<model fallback='allow'>EPYC</model>
<topology sockets='16' cores='4' threads='1'/>
<feature policy='require' name='ibpb'/>
<feature policy='require' name='virt-ssbd'/>
<numa>
<cell id='0' cpus='0-63' memory='16777216' unit='KiB'/>
</numa>
</cpu>
On attempts to start it, the host is logging this error: "CPU is incompatible with host CPU: Host CPU does not provide required features: virt-ssbd".
So, the HostedEngineLocal VM works because it has a requirement set for 'amd-ssbd' instead of 'virt-ssbd', and a VM requiring 'virt-ssbd' can't run on EPYC CPUs with CentOS 8.1. As mentioned, the HostedEngine ran fine on oVirt 4.3 with CentOS 7.8, and on 4.3 the CPU definition also required 'virt-ssbd', so I can only imagine that, perhaps due to the more recent 4.x kernel, I now need HE to require 'amd-ssbd' instead?
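For completeness, the host-side check can be reproduced with a small sketch that only greps the output of virsh domcapabilities (assumes virsh is available and libvirtd is running):

# Sketch: check whether libvirt lists 'virt-ssbd' / 'amd-ssbd' among the CPU
# features of this host. Assumes virsh is available and libvirtd is running.
import subprocess

caps = subprocess.run(['virsh', 'domcapabilities'],
                      capture_output=True, text=True).stdout

for feature in ('virt-ssbd', 'amd-ssbd'):
    hits = [line.strip() for line in caps.splitlines() if feature in line]
    print(feature, '->', hits or 'not listed')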
Any clues to help with this? I can completely wipe/reconfigure the hosts as needed so I'm willing to try whatever so that I can move forward with a 4.4 deployment.
Thanks!
Mark
3 years
tun: unexpected GSO type: 0x0, gso_size 1368, hdr_len 66
by lejeczek
hi everyone,
With 4.4 I get:
...
tun: unexpected GSO type
...
It happens with a "third-party" kernel, namely
5.6.15-1.el8.elrepo.x86_64 on CentOS 8.
I wonder if anybody sees the same or similar, and I also
wonder whether I should report it somewhere in Bugzilla as a
"heads-up" for new kernels?
[Fri May 29 08:00:02 2020] tun: unexpected GSO type: 0x0, gso_size 1368, hdr_len 66
[Fri May 29 08:00:02 2020] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[Fri May 29 08:00:02 2020] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[Fri May 29 08:00:02 2020] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[Fri May 29 08:00:02 2020] tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[Fri May 29 08:00:02 2020] ------------[ cut here ]------------
[Fri May 29 08:00:02 2020] WARNING: CPU: 2 PID: 3605 at drivers/net/tun.c:2123 tun_do_read+0x524/0x6c0 [tun]
[Fri May 29 08:00:02 2020] Modules linked in: sd_mod sg vhost_net vhost tap xt_CHECKSUM xt_MASQUERADE xt_conntrack nf_nat_tftp nf_conntrack_tftp tun nft_nat ipt_REJECT bridge nft_counter nft_objref nf_conntrack_netbios_ns nf_conntrack_broadcast nft_masq nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nf_tables_set nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 qedf qed ip6_tables crc8 nft_compat bnx2fc ip_set cnic uio libfcoe 8021q garp mrp stp llc libfc scsi_transport_fc nf_tables nfnetlink sunrpc vfat fat ext4 mbcache jbd2 snd_hda_codec_hdmi snd_hda_intel snd_intel_dspcfg snd_hda_codec snd_hda_core snd_hwdep snd_seq snd_seq_device edac_mce_amd snd_pcm kvm_amd kvm eeepc_wmi asus_wmi sp5100_tco sparse_keymap irqbypass rfkill wmi_bmof pcspkr joydev i2c_piix4 k10temp snd_timer snd soundcore gpio_amdpt gpio_generic acpi_cpufreq ip_tables xfs libcrc32c dm_crypt ax88179_178a usbnet mii hid_lenovo nouveau video mxm_wmi i2c_algo_bit
[Fri May 29 08:00:02 2020] drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops crct10dif_pclmul ttm crc32_pclmul crc32c_intel ahci libahci drm ghash_clmulni_intel libata nvme ccp r8169 nvme_core realtek wmi t10_pi pinctrl_amd dm_mirror dm_region_hash dm_log dm_mod
[Fri May 29 08:00:02 2020] CPU: 2 PID: 3605 Comm: vhost-3578 Not tainted 5.6.15-1.el8.elrepo.x86_64 #1
[Fri May 29 08:00:02 2020] Hardware name: System manufacturer System Product Name/PRIME B450M-A, BIOS 2006 11/13/2019
[Fri May 29 08:00:02 2020] RIP: 0010:tun_do_read+0x524/0x6c0 [tun]
[Fri May 29 08:00:02 2020] Code: 00 6a 01 0f b7 44 24 22 b9 10 00 00 00 48 c7 c6 cb 33 09 c1 48 c7 c7 d1 33 09 c1 83 f8 40 48 0f 4f c2 31 d2 50 e8 4c 14 df c5 <0f> 0b 58 5a 48 c7 c5 ea ff ff ff e9 d2 fc ff ff 4c 89 e2 be 04 00
[Fri May 29 08:00:02 2020] RSP: 0018:ffffaaf301dfbcb8 EFLAGS: 00010292
[Fri May 29 08:00:02 2020] RAX: 0000000000000000 RBX: ffff88ceae6b4800 RCX: 0000000000000007
[Fri May 29 08:00:02 2020] RDX: 0000000000000000 RSI: 0000000000000096 RDI: ffff88d14e8996b0
[Fri May 29 08:00:02 2020] RBP: 000000000000004e R08: 0000000000000516 R09: 0000000000000055
[Fri May 29 08:00:02 2020] R10: 000000000000072e R11: ffffaaf301dfba88 R12: ffffaaf301dfbe50
[Fri May 29 08:00:02 2020] R13: ffff88d0e98b8900 R14: 0000000000000000 R15: 0000000000000000
[Fri May 29 08:00:02 2020] FS: 0000000000000000(0000) GS:ffff88d14e880000(0000) knlGS:0000000000000000
[Fri May 29 08:00:02 2020] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[Fri May 29 08:00:02 2020] CR2: 000055f6b3af2bd8 CR3: 00000003c6c84000 CR4: 0000000000340ee0
[Fri May 29 08:00:02 2020] Call Trace:
[Fri May 29 08:00:02 2020] ? __wake_up_common+0x77/0x140
[Fri May 29 08:00:02 2020] tun_recvmsg+0x6b/0xf0 [tun]
[Fri May 29 08:00:02 2020] handle_rx+0x573/0x940 [vhost_net]
[Fri May 29 08:00:02 2020] ? log_used.part.45+0x20/0x20 [vhost]
[Fri May 29 08:00:02 2020] vhost_worker+0xcc/0x140 [vhost]
[Fri May 29 08:00:02 2020] kthread+0x10c/0x130
[Fri May 29 08:00:02 2020] ? kthread_park+0x80/0x80
[Fri May 29 08:00:02 2020] ret_from_fork+0x22/0x40
[Fri May 29 08:00:02 2020] ---[ end trace 9df20668f2e81977 ]---
many thanks, L.
3 years