problem with custom bond options
by Jiří Sléžka
Hello,
CentOS 8, oVirt 4.4.1.10-1.el8
I am trying to set up active-backup (mode=1) bonding with custom
properties. I have one 10GbE switch; the second is only 1GbE. The 10GbE
link is the primary one.
cat /etc/sysconfig/network-scripts/ifcfg-bond0
BONDING_OPTS="active_slave=ens5 downdelay=0 miimon=100
mode=active-backup primary=ens5 updelay=0"
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
IPV4_FAILURE_FATAL=no
IPV6_DISABLED=yes
IPV6INIT=no
NAME=bond0
UUID=c054364e-47cf-47ee-a7fc-70b37c9977e7
DEVICE=bond0
ONBOOT=yes
MTU=9000
When I try to add the custom parameter "fail_over_mac=active" (which I
believe could solve my problems with stale MAC addresses in the switch's
CAM table in case of a failover) I get
"Error while executing action HostSetupNetworks: Unexpected exception"
in the manager. In engine.log it looks like this:
2020-07-22 21:20:35,774+02 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand]
(default task-8) [da1984f3-f38b-4e0a-ac80-a81e67d73ff0] Unexpected
return value: Status [code=-32603, message=Internal JSON-RPC error:
{'reason': 'MAC address cannot be specified in bond interface along with
specified bond options'}]
2020-07-22 21:20:35,774+02 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand]
(default task-8) [da1984f3-f38b-4e0a-ac80-a81e67d73ff0] Failed in
'HostSetupNetworksVDS' method
2020-07-22 21:20:35,774+02 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand]
(default task-8) [da1984f3-f38b-4e0a-ac80-a81e67d73ff0] Unexpected
return value: Status [code=-32603, message=Internal JSON-RPC error:
{'reason': 'MAC address cannot be specified in bond interface along with
specified bond options'}]
2020-07-22 21:20:35,811+02 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-8) [da1984f3-f38b-4e0a-ac80-a81e67d73ff0] EVENT_ID:
VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovirt-hci01.mch.local command
HostSetupNetworksVDS failed: Internal JSON-RPC error: {'reason': 'MAC
address cannot be specified in bond interface along with specified bond
options'}
Could anybody explain to me what 'MAC address cannot be specified in bond
interface along with specified bond options' means? I believe no MAC
address is configured in the interface configuration.
Or does it mean 'fail_over_mac=active' is not supported in oVirt?
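For what it's worth, a quick way to see how the bonding driver itself treats this option, independent of oVirt, is via sysfs on the host. This is only a rough sketch for manual testing (it assumes the bond is still called bond0), not a persistent configuration:
# current fail_over_mac policy of the bond
cat /sys/class/net/bond0/bonding/fail_over_mac
# the driver only accepts a change while the bond has no slaves enslaved,
# so this is just a manual experiment outside of oVirt/VDSM control
echo active > /sys/class/net/bond0/bonding/fail_over_mac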
Thanks in advance,
Jiri
4 years, 4 months
Re: Ovirt SYSTEM user does not allow deletion of VM networks
by Konstantinos B
I've deleted them from the cluster as well, but they reappear.
I've removed ovirt-engine through engine-cleanup and installed it again, and they reappear.
I've now removed ovirt-engine, deleted all "ovirt*" files, and am currently trying to re-install.
I've checked the host's ovs-vsctl bridges and only the desired ones are shown.
So I believe it's an issue in the engine itself.
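For anyone reproducing that check, commands like the following (assuming a standard Open vSwitch install) show what OVS actually has configured on a host:
# list only the bridge names known to OVS on this host
ovs-vsctl list-br
# show bridges together with their ports and interfaces
ovs-vsctl show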
4 years, 5 months
Shutdown procedure for single host HCI Gluster
by Gianluca Cecchi
Hello,
I'm testing the single node HCI with ovirt-node-ng 4.3.9 iso.
Very nice and many improvements over the last time I tried it. Good!
I have a doubt about the shutdown procedure for the server.
Here are my steps:
- Shutdown all VMs (except engine)
- Put into maintenance data and vmstore domains
- Enable Global HA Maintenance
- Shutdown engine
- Shutdown hypervisor
It seems that the last step never completes, and I had to forcibly power off
the hypervisor.
Here is a screenshot of the endlessly failing unmount of
/gluster_bricks/engine:
https://drive.google.com/file/d/1ee0HG21XmYVA0t7LYo5hcFx1iLxZdZ-E/view?us...
What would be the right steps to take before the final shutdown of the hypervisor?
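For discussion, here is a hedged sketch of the kind of sequence I would expect to work; the service, volume and mount names below are the single-host HCI defaults and are assumptions on my part, not a tested procedure:
hosted-engine --set-maintenance --mode=global   # keep the HA agents from restarting the engine
hosted-engine --vm-shutdown                     # clean shutdown of the engine VM
systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd
gluster volume stop engine                      # release the brick before unmounting
# (repeat the volume stop/umount for the data and vmstore volumes if still mounted)
umount /gluster_bricks/engine
poweroff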
Thanks,
Gianluca
4 years, 5 months
RDP
by eevans@digitaldatatechs.com
I am using oVirt 4.3. I followed the instructions to get RDP to work for a user, but admin@internal is the only user RDP will launch for. The other test users I created were added to the Remote Desktop Users group, which is the permission I assigned to the VMs, as well as to the individual user names. I added the users first, then the group, to see if it would help.
So far admin@internal, the oVirt admin user, is the only user RDP will work for.
I know I am missing something; I'm just not sure what.
Any help would be appreciated.
Eric.
4 years, 5 months
very very bad iscsi performance
by Philip Brown
I'm trying to get optimal iSCSI performance. We're a heavy iSCSI shop, with a 10G network.
I'm experimenting with SSDs, and the performance in oVirt is way, way less than I would have hoped.
More than an order of magnitude slower.
Here's a data point.
I'm running filebench with the OLTP workload.
First, I run it on one of the hosts that has an SSD directly attached:
I create an xfs filesystem (on a VG "device" on top of the SSD), mount it with noatime, and run the benchmark.
37166: 74.084: IO Summary: 3746362 ops, 62421.629 ops/s, (31053/31049 r/w), 123.6mb/s, 161us cpu/op, 1.1ms latency
I then unmount it, make the exact same device an iSCSI target, and create a storage domain with it.
I then create a disk for a VM running *on the same host* and run the benchmark.
The same thing: filebench, OLTP workload, xfs filesystem, noatime.
13329: 91.728: IO Summary: 153548 ops, 2520.561 ops/s, (1265/1243 r/w), 4.9mb/s, 289us cpu/op, 88.4ms latency
62,000 ops/s vs 2500 ops/s.
what????
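In case anyone wants to reproduce the local baseline, here is a rough sketch of that run; the device path, mount point and workload file location are placeholders, not the exact ones I used:
mkfs.xfs /dev/ssdvg/bench                  # xfs on a VG device carved from the SSD
mount -o noatime /dev/ssdvg/bench /mnt/bench
# edit the workload's $dir to point at /mnt/bench before running;
# the workload file path depends on how filebench was packaged
filebench -f /usr/share/filebench/workloads/oltp.f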
Someone might be tempted to say, "try making the device directly available, AS a device, to the VM".
Unfortunately, this is not an option.
My goal is specifically to put together a new, high-performing storage domain that I can use for database devices in VMs.
I'm not expecting the same 62,000 ops/second.
but I was expecting at *least* 5,000. Ideally more like 10,000.
--
Philip Brown| Sr. Linux System Administrator | Medata, Inc.
5 Peters Canyon Rd Suite 250
Irvine CA 92606
Office 714.918.1310| Fax 714.918.1325
pbrown(a)medata.com| www.medata.com
4 years, 5 months
qemu-guest-agent on Ubuntu doesn't report FQDN
by Florian Schmid
Hi,
I have a problem with an Ubuntu 20.04 VM reporting the correct FQDN to the engine.
Starting with this release, the ovirt-guest-agent is not available anymore.
Therefore, I have installed qemu-guest-agent with the package defaults.
Now in the engine I only see the hostname under the FQDN tab, instead of the real fully qualified name with the domain.
I'm running an oVirt environment on 4.3.8.
The VM is resolvable; forward and reverse DNS entries are working.
hostname -f shows the correct FQDN.
Even adding IP and FQDN to /etc/hosts file doesn't change anything.
qemu-guest-agent version: 4.2-3ubuntu6.3
I manage this VM via ansible 2.9 and ansible is able to get the FQDN of the VM without any issues...
What can I do here to debug my issue?
Does the engine cache the wrong result? Even after stopping and starting the VM again, the engine only shows the hostname instead of the FQDN.
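A couple of hedged checks that might narrow this down (the VM name in the virsh call is a placeholder, and on an oVirt host virsh will ask for the libvirt SASL credentials):
# inside the guest: the agent reports what gethostname() returns, which is
# usually the static hostname rather than the FQDN
hostnamectl status
# on the hypervisor: ask the guest agent directly what it would hand to the engine
virsh qemu-agent-command <vm-name> '{"execute": "guest-get-host-name"}'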
Best regards,
Florian
4 years, 5 months
Problem with paused VMs in ovirt 4.3.10.
by Damien
Hi!
We have a problem with paused VMs in our oVirt cluster. Please help us
solve this.
The oVirt manager shows the message "VM rtb-stagedsw02-ovh has been paused."
Resume fails with the error "Failed to resume VM rtb-stagedsw02-ovh (Host:
ovirt-node09-ovh.local, User: admin@internal-authz)."
The oVirt cluster has 38 VMs; the only VMs that get paused are the Ubuntu 20.04 (focal) ones running Docker Swarm.
Archived logs are attached.
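As a first debugging step on the host that runs the paused VM, something like the following can show the recorded pause reason (the VM name and UUID are taken from the logs below; this is only a sketch, not output from our setup):
# libvirt's view of the domain state and the reason it entered it
virsh -r domstate rtb-stagedsw02-ovh --reason
# vdsm's view of the same VM, which includes the pause code when available
vdsm-client VM getStats vmID=18f6bb79-ba9b-4a0e-bcb2-b4ef4904ef99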
Packages on the oVirt nodes:
python2-ovirt-setup-lib-1.2.0-1.el7.noarch
ovirt-vmconsole-1.0.7-2.el7.noarch
ovirt-provider-ovn-driver-1.2.29-1.el7.noarch
ovirt-vmconsole-host-1.0.7-2.el7.noarch
python2-ovirt-host-deploy-1.8.5-1.el7.noarch
ovirt-imageio-common-1.5.3-0.el7.x86_64
cockpit-machines-ovirt-195.6-1.el7.centos.noarch
ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
ovirt-host-dependencies-4.3.5-1.el7.x86_64
ovirt-host-4.3.5-1.el7.x86_64
python-ovirt-engine-sdk4-4.3.4-2.el7.x86_64
ovirt-host-deploy-common-1.8.5-1.el7.noarch
ovirt-ansible-hosted-engine-setup-1.0.32-1.el7.noarch
ovirt-hosted-engine-setup-2.3.13-1.el7.noarch
ovirt-ansible-repositories-1.1.5-1.el7.noarch
ovirt-imageio-daemon-1.5.3-0.el7.noarch
cockpit-ovirt-dashboard-0.13.10-1.el7.noarch
ovirt-release43-4.3.10-1.el7.noarch
ovirt-hosted-engine-ha-2.3.6-1.el7.noarch
Packages on the HostedEngine:
ovirt-ansible-infra-1.1.13-1.el7.noarch
ovirt-vmconsole-1.0.7-2.el7.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.3.10.4-1.el7.noarch
ovirt-engine-websocket-proxy-4.3.10.4-1.el7.noarch
ovirt-engine-restapi-4.3.10.4-1.el7.noarch
ovirt-ansible-engine-setup-1.1.9-1.el7.noarch
ovirt-ansible-shutdown-env-1.0.3-1.el7.noarch
ovirt-iso-uploader-4.3.2-1.el7.noarch
ovirt-provider-ovn-1.2.29-1.el7.noarch
ovirt-imageio-proxy-setup-1.5.3-0.el7.noarch
ovirt-engine-extension-aaa-ldap-setup-1.3.10-1.el7.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.3.10.4-1.el7.noarch
python-ovirt-engine-sdk4-4.3.4-2.el7.x86_64
python2-ovirt-host-deploy-1.8.5-1.el7.noarch
ovirt-ansible-vm-infra-1.1.22-1.el7.noarch
ovirt-engine-metrics-1.3.7-1.el7.noarch
ovirt-ansible-disaster-recovery-1.2.0-1.el7.noarch
ovirt-engine-wildfly-overlay-17.0.1-1.el7.noarch
ovirt-ansible-roles-1.1.7-1.el7.noarch
ovirt-engine-dwh-setup-4.3.8-1.el7.noarch
python2-ovirt-engine-lib-4.3.10.4-1.el7.noarch
ovirt-engine-extension-aaa-ldap-1.3.10-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.3.10.4-1.el7.noarch
ovirt-engine-vmconsole-proxy-helper-4.3.10.4-1.el7.noarch
ovirt-engine-tools-backup-4.3.10.4-1.el7.noarch
ovirt-engine-webadmin-portal-4.3.10.4-1.el7.noarch
ovirt-host-deploy-common-1.8.5-1.el7.noarch
ovirt-ansible-image-template-1.1.12-1.el7.noarch
ovirt-ansible-manageiq-1.1.14-1.el7.noarch
ovirt-engine-wildfly-17.0.1-1.el7.x86_64
ovirt-ansible-hosted-engine-setup-1.0.32-1.el7.noarch
ovirt-imageio-common-1.5.3-0.el7.x86_64
ovirt-imageio-proxy-1.5.3-0.el7.noarch
python2-ovirt-setup-lib-1.2.0-1.el7.noarch
ovirt-vmconsole-proxy-1.0.7-2.el7.noarch
ovirt-engine-setup-base-4.3.10.4-1.el7.noarch
ovirt-engine-setup-plugin-cinderlib-4.3.10.4-1.el7.noarch
ovirt-engine-extensions-api-impl-4.3.10.4-1.el7.noarch
ovirt-release43-4.3.10-1.el7.noarch
ovirt-engine-backend-4.3.10.4-1.el7.noarch
ovirt-engine-tools-4.3.10.4-1.el7.noarch
ovirt-web-ui-1.6.0-1.el7.noarch
ovirt-ansible-cluster-upgrade-1.1.14-1.el7.noarch
ovirt-cockpit-sso-0.1.1-1.el7.noarch
ovirt-engine-ui-extensions-1.0.10-1.el7.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.3.10.4-1.el7.noarch
ovirt-engine-4.3.10.4-1.el7.noarch
ovirt-ansible-repositories-1.1.5-1.el7.noarch
ovirt-engine-extension-aaa-jdbc-1.1.10-1.el7.noarch
ovirt-host-deploy-java-1.8.5-1.el7.noarch
ovirt-engine-dwh-4.3.8-1.el7.noarch
ovirt-engine-api-explorer-0.0.5-1.el7.noarch
ovirt-guest-agent-common-1.0.16-1.el7.noarch
ovirt-engine-setup-4.3.10.4-1.el7.noarch
ovirt-engine-dbscripts-4.3.10.4-1.el7.noarch
In /var/log/ovirt-engine/engine.log:
2020-07-24 09:38:44,472+03 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(EE-ManagedThreadFactory-engineScheduled-Thread-91) [] VM
'18f6bb79-ba9b-4a0e-bcb2-b4ef4904ef99'(rtb-stagedsw02-ovh) move
d from 'Up' --> 'Paused'
2020-07-24 09:38:44,493+03 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-91) [] EVENT_ID:
VM_PAUSED(1,025), VM rtb-stagedsw02-ovh has been paused.
In /var/log/vdsm/vdsm.log
2020-07-24 09:38:42,771+0300 INFO (libvirt/events) [virt.vm]
(vmId='18f6bb79-ba9b-4a0e-bcb2-b4ef4904ef99') CPU stopped: onSuspend
(vm:6100)
2020-07-24 09:38:44,328+0300 INFO (jsonrpc/1) [api.host] FINISH getAllVmIoTunePolicies return={'status': {'message': 'Done', 'code': 0}, 'io_tune_policies_dict': {
'4d9519f6-1ab9-4032-8fdf-4c6118531544': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/glusterSD/10.0.11.107:_vmstore02/16c5070c-cc5f-4595-965f-66838c7c17a5/images/e1cfb9ec-39d8-416d-9f5f-0b54765301d4/8f95d60d-931b-4764-993c-ba9373efe361', 'name': 'sda'}]},
'b031a269-6bcd-40b7-9737-e47112a54b3a': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/glusterSD/10.0.11.101:_vmstore01/5e05fed3-448b-4f86-b5ba-004982194c90/images/9c3cc7a0-254e-4756-91b6-fb54e21abf38/71dd8024-8aec-46da-a80f-34260655e929', 'name': 'sda'}, {'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/glusterSD/10.0.11.101:_vmstore01/5e05fed3-448b-4f86-b5ba-004982194c90/images/3e3a5064-5fe1-40c0-81f5-44f1a3a4d503/13549972-82de-4746-aeea-3e1531f9c180', 'name': 'sdb'}]},
'b5fad17c-fa9d-4a80-99e7-6f86e6e19c9b': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': '/rhev/data-center/mnt/glusterSD/10.0.11.107:_vmstore02/16c5070c-cc5f-4595-965f-66838c7c17a5/images/15ce6cb0-6f06-4a31-92d8-b6e1bcabf3bc/613de344-d1ad-49aa-a2d0-d60ca9eb7cd3', 'name': 'sda'}]},
'18f6bb79-ba9b-4a0e-bcb2-b4ef4904ef99': {'policy': [], 'current_values': [{'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': u'/rhev/data-center/mnt/glusterSD/10.0.11.107:_vmstore02/16c5070c-cc5f-4595-965f-66838c7c17a5/images/7978e2db-c560-4315-a775-223f1b13ae31/d927eea8-e588-449e-b07b-c845d15b082e', 'name': 'sda'}, {'ioTune': {'write_bytes_sec': 0L, 'total_iops_sec': 0L, 'read_iops_sec': 0L, 'read_bytes_sec': 0L, 'write_iops_sec': 0L, 'total_bytes_sec': 0L}, 'path': u'/rhev/data-center/mnt/glusterSD/10.0.11.107:_vmstore02/16c5070c-cc5f-4595-965f-66838c7c17a5/images/b925dc2e-17ba-470d-a9be-cb96d4ef1f0d/951d9712-7160-4f88-a838-970aec82b3ea', 'name': 'sdb'}]}}} from=::1,34598 (api:54)
2020-07-24 09:38:49,747+0300 WARN (qgapoller/1)
[virt.periodic.VmDispatcher] could not run <function <lambda> at
0x7fe5c84de6e0> on ['18f6bb79-ba9b-4a0e-bcb2-b4ef4904ef99']
(periodic:289)
In /var/log/libvirt/qemu/rtb-stagedsw03-ovh.log
KVM: entry failed, hardware error 0x80000021
If you're running a guest on an Intel machine without unrestricted mode
support, the failure can be most likely due to the guest entering an invalid
state for Intel VT. For example, the guest maybe running in big real mode
which is not supported on less recent Intel processors.
EAX=00001000 EBX=43117da8 ECX=0000000c EDX=00000121
ESI=00000003 EDI=17921000 EBP=43117cb0 ESP=43117c98
EIP=00008000 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=1 HLT=0
ES =0000 00000000 ffffffff 00809300
CS =9b00 7ff9b000 ffffffff 00809300
SS =0000 00000000 ffffffff 00809300
DS =0000 00000000 ffffffff 00809300
FS =0000 00000000 ffffffff 00809300
GS =0000 00000000 ffffffff 00809300
LDT=0000 00000000 000fffff 00000000
TR =0040 001ce000 0000206f 00008b00
GDT= 001cc000 0000007f
IDT= 00000000 00000000
CR0=00050032 CR2=17921000 CR3=2b92a003 CR4=00000000
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000
DR3=0000000000000000
DR6=00000000fffe0ff0 DR7=0000000000000400
EFER=0000000000000000
Code=ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
<ff> ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
ff ff ff ff ff ff ff ff
4 years, 5 months
iSCSI multipath with separate subnets... still not possible in 4.4.x?
by Mark R
I'm looking through quite a few bug reports and mailing list threads, but want to make sure I'm not missing some recent development. It appears that doing iSCSI with two separate, non-routed subnets is still not possible with 4.4.x. I have the dead-standard iSCSI setup with two separate switches, separate interfaces on hosts and storage, and separate subnets that have no gateway and are completely unreachable except from directly attached interfaces.
The hosted engine comes up with multiple paths and everything is perfect, but that's because the Ansible/hosted-engine deploy scripts have configured things correctly. Once you need to import or add new storage domains, it's not possible to do so in a way that gets both paths connected *and* persists across host reboots.
Following the docs, you create "iSCSI Multipath" bonds (plural: you can _not_ create a single bond that includes both interfaces and hope things route correctly, because oVirt will try to connect from the interface on storage network A to the target on storage network B, which can't happen since they are not routed, and should not be). There's nothing in the docs about how you can actually accomplish multipathing, but there are a few mailing list messages that say to just create two separate "iSCSI Multipath" bonds in the data center, one for each of your two interfaces. You can do this, and you'll get hopeful that things might work now. You can do discovery and it succeeds, because there is no more trying to connect to unreachable targets.
However, and this is the big caveat, there's no way to tell this new/imported domain "use this other interface as well, so you have redundant paths". Once the domain is attached and activated, you have a single path. You can then manage the domain, do a discovery, see a path that isn't connected yet, and log into it as well. Now you have two paths; is everything right with the world? Nope: it's impossible to persist that connection, it will be gone on the next reboot, and you'll always have to manually visit each host, do discovery, and log in. Nothing in the UI allows you to "save" that second connection in a way that it will be used again. Clicking "OK" does not, and going back to the "iSCSI Multipath" area of the data center you can't edit each of the bonds and make sure each logical network has every possible target checked, because the targets you've manually logged into are never listed in that area of the UI.
So I really, really hope I'm wrong, because I'd like to move past this snag and on to the next one (which is that bond interfaces in 4.4.x will not allow you to attach additional networks; that works great in 4.3 but appears broken in 4.4.x). But there's no sense chasing that yet if iSCSI multipath isn't possible, which is how it's looking.
Has anyone had success running iSCSI in what is by far the most common setup out there, even though it's one oVirt really doesn't seem to want to let you use? This is driving me nuts; I've paved and rebuilt these hosts dozens of times now, trying different methods in the hope of getting multipath that persists.
4 years, 5 months
Hosted Engine 4.4.1
by Vijay Sachdeva
Hello Everyone,
The "Wait for the host to be up" task has been stuck for hours, and when I checked the engine log I found this:
2020-07-22 16:50:35,717+02 ERROR [org.ovirt.engine.core.sso.utils.SsoUtils] (default task-1) [] OAuthException access_denied: Cannot authenticate user 'None@N/A': No valid profile found in credentials..
Has anyone faced such an issue? If so, please help me out!
Thanks
Vijay Sachdeva
4 years, 5 months
Re: oVirt 4.3 ssh passwordless setup guide
by Strahil Nikolov
You need to keep ssh root access from the engine, so you will need a 'Match' stanza for the engine.
Of course testing is very important, but in case you have no test setup, you can put a node into maintenance and experiment a little bit.
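A hedged sketch of the kind of sshd_config 'Match' stanza meant here (the engine address is a placeholder; adapt it to your own hardening policy):
# global policy: no root logins, no password authentication
PermitRootLogin no
PasswordAuthentication no
# but the engine still needs key-based root access for host deploy/management
Match Address 192.0.2.10
    PermitRootLogin prohibit-password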
Best Regards,
Strahil Nikolov
On 23 July 2020 at 21:29:06 GMT+03:00, "Morris, Roy" <roy.morris(a)ventura.org> wrote:
>Hello,
>
>Does anyone have a guide or how-to on setting up oVirt with
>passwordless ssh? I want to do this in a production environment
>to improve security, but I have never done this before and want to build
>a test environment to try it out.
>
>Best regards,
>Roy Morris
4 years, 5 months