Unable to create a network bond in OLVM
by jdmr4815@gmail.com
Good morning all,
I'm rather new to this, so please forgive my ignorance. I've searched the official Oracle documentation, the oVirt administration guide, Reddit, LLMs, and about every other source I could find, but haven't seen anything that allows me to set up a bond successfully.
For background, I'm currently using one server as management and another server as a local storage host. I'm not smart enough yet to do network storage.
I've tried to create a bond in two ways. First, the recommended way (per the administration guide, Oracle documentation, etc.): creating the bond within OLVM by dragging and dropping interfaces onto each other. Second, creating the bond manually on my management server and verifying it was present in the Hosts -> Network Interfaces screen. Either way, when I attempt to assign a logical network to the bond, I get some variation of the following in /var/log/vdsm/vdsm.log:
from=::ffff:10.90.104.68,37286, flow_id=6d139ffe-d9dc-4993-b050-72baa9eca527 (api:35)
2025-02-10 09:53:11,257-0600 ERROR (jsonrpc/1) [jsonrpc.JsonRpcServer] Internal server error (__init__:343)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/yajsonrpc/__init__.py", line 338, in _handle_request
res = method(**params)
File "/usr/lib/python3.6/site-packages/vdsm/rpc/Bridge.py", line 186, in _dynamicMethod
result = fn(*methodArgs)
File "<decorator-gen-508>", line 2, in setupNetworks
File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 33, in method
ret = func(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/API.py", line 1576, in setupNetworks
supervdsm.getProxy().setupNetworks(networks, bondings, options)
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 38, in __call__
return callMethod()
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 36, in <lambda>
**kwargs)
File "<string>", line 2, in setupNetworks
File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in _callmethod
raise convert_to_error(kind, result)
libnmstate.error.NmstateVerificationError:
desired
=======
---
name: bond0
type: bond
state: up
ipv4:
enabled: false
ipv6:
enabled: false
link-aggregation:
mode: 802.3ad
options:
xmit_hash_policy: layer2+3
port:
- eno5
- eno6
current
=======
---
name: bond0
type: bond
state: up
ipv4:
enabled: false
ipv6:
enabled: false
lldp:
enabled: false
difference
==========
--- desired
+++ current
@@ -6,10 +6,5 @@
enabled: false
ipv6:
enabled: false
-link-aggregation:
- mode: 802.3ad
- options:
- xmit_hash_policy: layer2+3
- port:
- - eno5
- - eno6
+lldp:
+ enabled: false
After some research, it seemed like the solution was to save the "desired" state to a YAML file and apply it with nmstatectl apply filename.yaml. However, doing so produces the following in the same log:
2025-02-10 10:04:51,577 root DEBUG Interface ethernet.eno7 found. Merging the interface information.
2025-02-10 10:04:51,577 root DEBUG Interface ethernet.eno8 found. Merging the interface information.
2025-02-10 10:04:51,605 root DEBUG Async action: Rollback to checkpoint /org/freedesktop/NetworkManager/Checkpoint/31 started
2025-02-10 10:04:51,607 root DEBUG Checkpoint /org/freedesktop/NetworkManager/Checkpoint/31 rollback executed
2025-02-10 10:04:51,607 root DEBUG Interface eno3 rollback succeeded
2025-02-10 10:04:51,607 root DEBUG Interface lo rollback succeeded
2025-02-10 10:04:51,607 root DEBUG Interface eno2 rollback succeeded
2025-02-10 10:04:51,607 root DEBUG Interface eno8 rollback succeeded
2025-02-10 10:04:51,607 root DEBUG Interface eno6 rollback succeeded
2025-02-10 10:04:51,607 root DEBUG Interface eno5 rollback succeeded
2025-02-10 10:04:51,607 root DEBUG Interface eno7 rollback succeeded
2025-02-10 10:04:51,607 root DEBUG Interface eno4 rollback succeeded
2025-02-10 10:04:51,607 root DEBUG Interface eno1 rollback succeeded
2025-02-10 10:04:51,608 root DEBUG Interface PLZ-NET rollback succeeded
2025-02-10 10:04:51,608 root DEBUG Async action: Rollback to checkpoint /org/freedesktop/NetworkManager/Checkpoint/31 finished
Traceback (most recent call last):
File "/usr/bin/nmstatectl", line 11, in <module>
load_entry_point('nmstate==1.4.6', 'console_scripts', 'nmstatectl')()
File "/usr/lib/python3.6/site-packages/nmstatectl/nmstatectl.py", line 74, in main
return args.func(args)
File "/usr/lib/python3.6/site-packages/nmstatectl/nmstatectl.py", line 355, in apply
args.save_to_disk,
File "/usr/lib/python3.6/site-packages/nmstatectl/nmstatectl.py", line 419, in apply_state
save_to_disk=save_to_disk,
File "/usr/lib/python3.6/site-packages/libnmstate/netapplier.py", line 140, in apply
plugins, net_state, verify_change, save_to_disk, verify_retry
File "/usr/lib/python3.6/site-packages/libnmstate/netapplier.py", line 190, in _apply_ifaces_state
_verify_change(plugins, net_state)
File "/usr/lib/python3.6/site-packages/libnmstate/netapplier.py", line 197, in _verify_change
net_state.verify(current_state)
File "/usr/lib/python3.6/site-packages/libnmstate/net_state.py", line 126, in verify
self._verify_other_global_info(current_state)
File "/usr/lib/python3.6/site-packages/libnmstate/net_state.py", line 136, in _verify_other_global_info
{key: cur_value},
libnmstate.error.NmstateVerificationError:
desired
=======
---
name: PLZ-NET
current
=======
---
name: null
difference
==========
--- desired
+++ current
@@ -1,2 +1,2 @@
---
-name: PLZ-NET
+name: null
I have no issues using a single interface: I drop the logical network (PLZ-NET) onto an interface (e.g. eno2) and everything works swimmingly, but bonding just won't work.
If anyone has guidance or can point me to a solution, I would greatly appreciate it. I've been banging my head against the wall for a week.
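For reference, the desired state from the log translates into this nmstate file (my reconstruction of what I've been applying; eno5/eno6 and the 802.3ad options come straight from the "desired" block above):

```yaml
# bond0.yaml -- nmstate desired state, rebuilt from the "desired" section in the log
interfaces:
  - name: bond0
    type: bond
    state: up
    ipv4:
      enabled: false
    ipv6:
      enabled: false
    link-aggregation:
      mode: 802.3ad
      options:
        xmit_hash_policy: layer2+3
      port:
        - eno5
        - eno6
```

Applied with: nmstatectl apply bond0.yaml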
Hosted Engine setup issues.
by daznis@gmail.com
Hello,
I can't figure out what is happening with the deployment of my hosted engine setup onto a GlusterFS cluster. For some unknown reason, Ansible is getting the IP address of the virbr0 interface, the default bridge created by the libvirt installation, which is down as it isn't connected to anything.
The error message I'm getting is [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The resolved address doesn't resolve on the selected interface\n"},
But after digging deeper, I found that Ansible is getting a wrong "he_host_ip" value. Log snippet: 2025-02-10 15:39:13,484+0200 DEBUG var changed: host "localhost" var "he_host_ip" type "<class 'ansible.utils.unsafe_proxy.AnsibleUnsafeText'>" value: ""192.168.124.1""
How should I proceed to solve this? I'm using a bonded interface with a VLAN for the management network (bond0.536).
Getting VM permissions from the ovirt.ovirt Ansible collection
by Colin Coe
Hi all
I'm writing an Ansible playbook to capture VM info, specifically the MAC address and the user permissions, but I'm having issues resolving the user UUID returned by ovirt.ovirt.ovirt_vm_info into something I can use, like a userid. ovirt.ovirt.ovirt_vm_info gives me:
"vm_facts.ovirt_vms[0].permissions[0]": {
    "href": "/ovirt-engine/api/vms/39e07b2a-4cbe-48d8-9e03-3a903027c2b0/permissions/00000003-0003-0003-0003-0000000002ad",
    "id": "00000003-0003-0003-0003-0000000002ad",
    "role": {
        "href": "/ovirt-engine/api/roles/00000000-0000-0000-0000-000000000001",
        "id": "00000000-0000-0000-0000-000000000001"
    },
    "user": {
        "href": "/ovirt-engine/api/users/fdfc627c-d875-11e0-90f0-83df133b58cc",
        "id": "fdfc627c-d875-11e0-90f0-83df133b58cc"
    },
    "vm": {
        "href": "/ovirt-engine/api/vms/39e07b2a-4cbe-48d8-9e03-3a903027c2b0",
        "id": "39e07b2a-4cbe-48d8-9e03-3a903027c2b0"
    }
}
I'm looking to get the userids of the users assigned to this VM with the "UserRole" role.
Any ideas how I can do this with the ovirt.ovirt ansible collection?
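The direction I've been experimenting with is to follow the user "href" from the permission with a plain REST call (a sketch only; engine.example.com and the credentials are placeholders, and I'm assuming the returned user record exposes a user_name field):

```yaml
# Sketch: resolve the user UUID from the permission into a user record.
# The /users/{id} path comes from the "href" in the output above.
- name: Look up the user behind the permission
  ansible.builtin.uri:
    url: "https://engine.example.com{{ vm_facts.ovirt_vms[0].permissions[0].user.href }}"
    user: admin@internal
    password: "{{ engine_password }}"
    force_basic_auth: true
    validate_certs: false
    headers:
      Accept: application/json
  register: user_info

- name: Show the resolved user name
  ansible.builtin.debug:
    msg: "{{ user_info.json.user_name }}"
```

If that works, restricting this to the "UserRole" role should just be a matter of comparing each permission's role id against the UserRole role's id before following the user href.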
Thanks
Query on Cluster CPU type in OLVM
by dushyantk.sun@gmail.com
1. Does selecting a particular "Cluster CPU Type" while creating a cluster have any performance impact on VMs?
2. I have Dell and HP servers. While creating a cluster, if I select the Intel Broadwell or Cascade Lake CPU type, it shows "Host CPU Type Is Not Compatible With Cluster Properties".
3. Since Intel Nehalem is the lowest CPU family version and supports the old Dell/HP servers, I selected it for my cluster. However, the VM shows the CPU model name "Intel Core i7 9xx (Nehalem Class Core i7)", while the same host in VMware shows "Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz".
How can we fix this CPU model name?
Q: Master storage domain on failed hardware
by Andrei Verovski
Hi,
Is it possible to activate a data center if the master storage domain is not available anymore?
For example, when the server hosting the master storage domain dies.
Thanks and with best regards
Andrei
export very slow
by Enrico Becchetti
Dear all,
I have an oVirt cluster with 3 Dell R7525 nodes and about 70 virtual
machines. I have been using ovirtbackup (Python) for a long time to save
the VMs.
Unfortunately it has always been a very slow process, and with time and
the ever-increasing size of the VMs it has become unusable.
The oVirt cluster has two storage domains: "DATA", a Fibre Channel domain
(8 Gb/s) with 18 TB, and an NFS EXPORT domain served by a
dual-processor HPE ProLiant with 4x1 Gb/s links.
I'll give you an example. I have a VM with four virtual disks totalling
1.8 TB, and these are the steps performed to back up this VM.
As you will see, it takes many hours to get the clone from the snapshot
and then many hours to copy the clone to the NFS storage.
How can I reduce the time needed for the backup?
Thank you !
Best Regards
Enrico
-----------------------------------------------------------------
Feb 2, 2025, 6:26:47 AM
Snapshot 'Snapshot for backup script' creation for VM 'GRAYLOG-9' was
initiated by admin@internal-authz.
45
admin@internal-authz
infn-vm10.management
GRAYLOG-9
Blank
INFNPG
DELL
70bbb81b-ed44-4700-839a-96c1df8daeea
oVirt
-----------------------------------------------------------------
Feb 2, 2025, 6:27:22 AM
Snapshot 'Snapshot for backup script' creation for VM 'GRAYLOG-9' has
been completed.
68
admin@internal-authz
infn-vm10.management
GRAYLOG-9
Blank
INFNPG
DELL
70bbb81b-ed44-4700-839a-96c1df8daeea
oVirt
-----------------------------------------------------------------
Feb 2, 2025, 6:27:37 AM
VM GRAYLOG-9_BCK_020244 creation was initiated by admin@internal-authz.
37
admin@internal-authz
GRAYLOG-9_BCK_020244
Blank
INFNPG
DATA_FC_P2050
DELL
edde40d9-2a26-4d88-8a02-bd928c1ded4a
oVirt
-----------------------------------------------------------------
Feb 2, 2025, 1:59:30 PM
VM GRAYLOG-9_BCK_020244 creation has been completed.
53
admin@internal-authz
GRAYLOG-9_BCK_020244
Blank
INFNPG
DATA_FC_P2050
DELL
edde40d9-2a26-4d88-8a02-bd928c1ded4a
oVirt
-----------------------------------------------------------------
Feb 2, 2025, 1:59:34 PM
Snapshot 'Snapshot for backup script' deletion for VM 'GRAYLOG-9' was
initiated by admin@internal-authz.
342
admin@internal-authz
infn-vm10.management
GRAYLOG-9
Blank
INFNPG
DELL
fcfc6291-a55f-431a-8b4c-87cb8e3615ff
oVirt
-----------------------------------------------------------------
Feb 2, 2025, 2:04:00 PM
Snapshot 'Snapshot for backup script' deletion for VM 'GRAYLOG-9' has
been completed.
356
admin@internal-authz
infn-vm10.management
GRAYLOG-9
Blank
INFNPG
DELL
fcfc6291-a55f-431a-8b4c-87cb8e3615ff
oVirt
-----------------------------------------------------------------
Feb 2, 2025, 2:04:08 PM
Starting export Vm GRAYLOG-9_BCK_020244 to VMS_EXPORT
1162
admin@internal-authz
GRAYLOG-9_BCK_020244
INFNPG
VMS_EXPORT
DELL
b964e3b3-df13-4b82-8128-00984eea380b
oVirt
-----------------------------------------------------------------
Feb 4, 2025, 5:27:50 AM
Vm GRAYLOG-9_BCK_020244 was exported successfully to VMS_EXPORT
1150
admin@internal-authz
GRAYLOG-9_BCK_020244
INFNPG
VMS_EXPORT
DELL
b964e3b3-df13-4b82-8128-00984eea380b
oVirt
-----------------------------------------------------------------
Feb 4, 2025, 5:27:53 AM
VM GRAYLOG-9_BCK_020244 configuration was updated by admin@internal-authz.
35
admin@internal-authz
GRAYLOG-9_BCK_020244
Blank
INFNPG
DELL
44480c68-10ad-4660-bd9b-297043210503
oVirt
-----------------------------------------------------------------
Feb 4, 2025, 5:28:00 AM
VM GRAYLOG-9_BCK_020244 was successfully removed by admin@internal-authz.
113
admin@internal-authz
GRAYLOG-9_BCK_020244
Blank
INFNPG
DELL
d141ad60-6cf8-4e77-b438-26b5481f4256
oVirt
-----------------------------------------------------------------
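Putting the event timestamps above into durations (a quick sketch in Python; the times are copied from the engine events):

```python
from datetime import datetime

FMT = "%b %d %Y %H:%M:%S"

# Timestamps copied from the engine events above (2025, 24-hour clock)
clone_start  = datetime.strptime("Feb 2 2025 06:27:37", FMT)  # clone VM creation initiated
clone_end    = datetime.strptime("Feb 2 2025 13:59:30", FMT)  # clone VM creation completed
export_start = datetime.strptime("Feb 2 2025 14:04:08", FMT)  # export to VMS_EXPORT started
export_end   = datetime.strptime("Feb 4 2025 05:27:50", FMT)  # export completed

print("clone from snapshot:", clone_end - clone_start)  # 7:31:53 on the FC domain
print("export to NFS:", export_end - export_start)      # 1 day, 15:23:42 to VMS_EXPORT
```

So the export alone takes roughly 39.4 hours. If the full 1.8 TB went over the wire, that is only about 13 MB/s effective, far below even a single 1 Gb/s link, which is why the copy to the NFS export domain dominates the backup time.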
VMs shut down during boot
by eevans@digitaldatatechs.com
I have an issue with getting VMs to run. I get this error: "VM is down with error. Exit message: Lost connection with qemu process."
2025-02-02 13:27:55,131-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-13) [5ad46633] EVENT_ID: VM_DOWN_ERROR(119), VM DBServer-2-18-2024 is down with error. Exit message: Lost connection with qemu process.
I see this in the Red Hat portal, but I don't have an entitlement or I wouldn't be here.
My setup is CentOS 9 with oVirt 4.5: a separate node controlling the Gluster server and oVirt, and three nodes managed by the fourth. I ran this same setup in the past with no problem.
I hope one of the RH folks will post the fix. It is a verified issue on the Red Hat portal.
I appreciate any help you can give me.