VDSM Command DetachStorageDomainVDS Failed
by Matthew J Black
Hi All,
So after having an oVirt cluster crash (no data loss (at this stage), thankfully) and rebuilding from scratch, I'm trying to import the old Storage Domains. I've been successful with three (automatically "detaching" from the old, now non-existent hosts) but one is giving me the following error from the hosted-engine GUI:
~~~
VDSM command DetachStorageDomainVDS failed: Cannot acquire host id: ('e311ddf1-7f2c-49ef-a618-050d9a2b947f', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))
~~~
I am going to assume ("make an ass out for you and me") that I can run the required command from the cli with a "-force" flag (I hope), so my Q is: What is the command, because I can't seem to find it in any doco (most probably because I'm old and my eyes are feeble :-) ).
Any help gratefully appreciated - FTR: that particular Storage Domain have a bunch of VM Images on it which I'd rather import/recover than have to create from scratch.
Cheers
Dulux-Oz
10 months, 1 week
Deployment Error: Host is not up - Looking For Some Advice/Pointers
by Matthew J Black
Hi All,
So, on a fresh install on RL v9.3, we're getting a `Host is not up, please check logs, perhaps also on the engine machine` error.
This comes up right after the 20min timeout (the 120sec * 10 one).
No, the hosted-engine is not deployed (ie hosted-engine --check-deployed).
Obviously I need to check the logs, but which ones in particular (the hosted-engine-setup logs, obviously, but which other ones), where are they if the hosted-engine is not running, and, in an effort to narrow down the volume of info to a more manageable level, what should I be looking for?
Thanks in advance
10 months, 1 week
Please, Please Help - New oVirt Install/Deployment Failing - "Host is not up..."
by Matthew J Black
Hi Everyone,
Could someone please help me - I've been trying to do an install of oVirt for *weeks* (including false starts and self-inflicted wounds/errors) and it is still not working.
My setup:
- oVirt v4.5.3
- A brand new fresh vanilla install of RockyLinux 8.6 - all working AOK
- 2*NICs in a bond (802.3ad) with a couple of sub-Interfaces/VLANs - all working AOK
- All relevant IPv4 Address in DNS with Reverse Lookups - all working AOK
- All relevant IPv4 Address in "/etc/hosts" file - all working AOK
- IPv6 (using "method=auto" in the interface config file) enabled on the relevant sub-Interface/VLAN - I'm not using IPv6 on the network, only IPv4, but I'm trying to cover all the bases.
- All relevant Ports (as per the oVirt documentation) set up on the firewall
- ie firewall-cmd --add-service={{ libvirt-tls | ovirt-imageio | ovirt-vmconsole | vdsm }}
- All the relevant Repositories installed (ie RockyLinux BaseOS, AppStream, & PowerTools, and the EPEL, plus the ones from the oVirt documentation)
I have followed the oVirt documentation (including the special RHEL-instructions and RockyLinux-instructions) to the letter - no deviations, no special settings, exactly as they are written.
All the dnf installs, etc, went off without a hitch, including the "dnf install centos-release-ovirt45", "dnf install ovirt-engine-appliance", and "dnf install ovirt-hosted-engine-setup" - no errors anywhere.
Here is the results of a "dnf repolist":
- appstream Rocky Linux 8 - AppStream
- baseos Rocky Linux 8 - BaseOS
- centos-ceph-pacific CentOS-8-stream - Ceph Pacific
- centos-gluster10 CentOS-8-stream - Gluster 10
- centos-nfv-openvswitch CentOS-8 - NFV OpenvSwitch
- centos-opstools CentOS-OpsTools - collectd
- centos-ovirt45 CentOS Stream 8 - oVirt 4.5
- cs8-extras CentOS Stream 8 - Extras
- cs8-extras-common CentOS Stream 8 - Extras common packages
- epel Extra Packages for Enterprise Linux 8 - x86_64
- epel-modular Extra Packages for Enterprise Linux Modular 8 - x86_64
- ovirt-45-centos-stream-openstack-yoga CentOS Stream 8 - oVirt 4.5 - OpenStack Yoga Repository
- ovirt-45-upstream oVirt upstream for CentOS Stream 8 - oVirt 4.5
- powertools Rocky Linux 8 - PowerTools
So I kicked-off the oVirt deployment with: "hosted-engine --deploy --4 --ansible-extra-vars=he_offline_deployment=true".
I used "--ansible-extra-vars=he_offline_deployment=true" because without that flag I was getting "DNF timout" issues (see my previous post `Local (Deployment) VM Can't Reach "centos-ceph-pacific" Repo`).
I answer the defaults to all of questions the script asked, or entered the deployment-relevant answers where appropriate. In doing this I double-checked every answer before hitting <Enter>. Everything progressed smoothly until the deployment reached the "Wait for the host to be up" task... which then hung for more than 30 minutes before failing.
From the ovirt-hosted-engine-setup... log file:
- 2022-10-20 17:54:26,285+1100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost]: FAILED! => {"changed": false, "msg": "Host is not up, please check logs, perhaps also on the engine machine"}
I checked the following log files and found all of the relevant ERROR lines, then checked several 10s of proceeding and succeeding lines trying to determine what was going wrong, but I could not determine anything.
- ovirt-hosted-engine-setup...
- ovirt-hosted-engine-setup-ansible-bootstrap_local_vm...
- ovirt-hosted-engine-setup-ansible-final_clean... - not really relevant, I believe
I can include the log files (or the relevant parts of the log files) if people want - but that are very large: several 100 kilobytes each.
I also googled "oVirt Host is not up" and found several entries, but after reading them all the most relevant seems to be a thread from these mailing list: `Install of RHV 4.4 failing - "Host is not up, please check logs, perhaps also on the engine machine"` - but this seems to be talking about an upgrade and I didn't gleam anything useful from it - I could, of course, be wrong about that.
So my questions are:
- Where else should I be looking (ie other log files, etc, and possible where to find them)?
- Does anyone have any idea why this isn't working?
- Does anyone have a work-around (including a completely manual process to get things working - I don't mind working in the CLI with virsh, etc)?
- What am I doing wrong?
Please, I'm really stumped with this, and I really do need help.
Cheers
Dulux-Oz
10 months, 1 week
ovirt node ng 4.5.5 fresh install fails
by Levi Wilbert
I'm attempting to update our oVirt cluster to 4.5.5 from 4.5.4, running oVirt Node NG on the hosts.
When I tried updating a host through the oVirt Manager GUI, after the host reboots, it fails to start up and goes into emergency recovery mode:
[ 4.534872] localhost systemd[1]: Reached target Local File Systems.
[ 4.535119] localhost systemd[1]: Reached target System Initialization.
[ 4.535343] localhost systemd[1]: Reached target Basic System.
[ 4.536759] localhost systemd[1]: Started Hardware RNG Entropy Gatherer Daemon.
[ 4.541801] localhost rngd[1512]: Disabling 7: PKCS11 Entropy generator (pkcs11)
[ 4.541801] localhost rngd[1512]: Disabling 5: NIST Network Entropy Beacon (nist)
[ 4.541801] localhost rngd[1512]: Disabling 9: Qrypt quantum entropy beacon (qrypt)
[ 4.541801] localhost rngd[1512]: Initializing available sources
[ 4.542073] localhost rngd[1512]: [hwrng ]: Initialization Failed
[ 4.542073] localhost rngd[1512]: [rdrand]: Enabling RDSEED rng support
[ 4.542073] localhost rngd[1512]: [rdrand]: Initialized
[ 4.542073] localhost rngd[1512]: [jitter]: JITTER timeout set to 5 sec
[ 4.582381] localhost rngd[1512]: [jitter]: Initializing AES buffer
[ 8.309063] localhost rngd[1512]: [jitter]: Enabling JITTER rng support
[ 8.309063] localhost rngd[1512]: [jitter]: Initialized
[ 133.884355] localhost dracut-initqueue[1095]: Warning: dracut-initqueue: timeout, still waiting for following initqueue hooks:
[ 133.885349] localhost dracut-initqueue[1095]: Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fdisk\x2fby-id\x2fmd-uuid-3f47cad8:fecb96ea:0ea37615:4e5dec4e.sh: "[ -e "/dev/disk/by-id/md-uuid-3f47cad8:fecb96ea:0ea37615:4e5dec4e" ]"
[ 133.886485] localhost dracut-initqueue[1095]: Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fdisk\x2fby-id\x2fmd-uuid-d446b801:d515c112:116ff07f:9ae52466.sh: "[ -e "/dev/disk/by-id/md-uuid-d446b801:d515c112:116ff07f:9ae52466" ]"
[ 133.887619] localhost dracut-initqueue[1095]: Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fonn\x2fovirt-node-ng-4.5.5-0.20231130.0+1.sh: "if ! grep -q After=remote-fs-pre.target /run/systemd/generator/systemd-cryptsetup(a)*.service 2>/dev/null; then
[ 133.887619] localhost dracut-initqueue[1095]: [ -e "/dev/onn/ovirt-node-ng-4.5.5-0.20231130.0+1" ]
[ 133.887619] localhost dracut-initqueue[1095]: fi"
[ 133.888667] localhost dracut-initqueue[1095]: Warning: /lib/dracut/hooks/initqueue/finished/devexists-\x2fdev\x2fonn\x2fswap.sh: "[ -e "/dev/onn/swap" ]"
[ 133.890050] localhost dracut-initqueue[1095]: Warning: dracut-initqueue: starting timeout scripts
[ 133.969228] localhost dracut-initqueue[7366]: Scanning devices md126p2 for LVM logical volumes onn/ovirt-node-ng-4.5.5-0.20231130.0+1
[ 133.969228] localhost dracut-initqueue[7366]: onn/swap
[ 134.001560] localhost dracut-initqueue[7366]: onn/ovirt-node-ng-4.5.5-0.20231130.0+1 thin
[ 134.001560] localhost dracut-initqueue[7366]: onn/swap linear
[ 134.014259] localhost dracut-initqueue[7381]: /etc/lvm/profile/imgbased-pool.profile: stat failed: No such file or directory
[ 134.532608] localhost dracut-initqueue[7381]: Check of pool onn/pool00 failed (status:64). Manual repair required!
I then attempted installing the oVirt Node NG 4.5.5 iso to a USB stick and tried installing that way, however, after going through the GUI and setting up storage, network, hostname, etc, the install fails shortly after clicking "Begin".
22:11:32,671 WARNING org.fedoraproject.Anaconda.Modules.Storage:INFO:blivet:executing action: [468] destroy device lvmthinlv onn-var_log_audit (id 216)
22:11:32,672 WARNING org.fedoraproject.Anaconda.Modules.Storage:DEBUG:blivet: LVMLogicalVolumeDevice.destroy: onn-var_log_audit ; status: False ;
22:11:32,673 WARNING org.fedoraproject.Anaconda.Modules.Storage:DEBUG:blivet: LVMLogicalVolumeDevice.teardown: onn-var_log_audit ; status: False ; controllable: False ;
22:11:32,674 WARNING org.fedoraproject.Anaconda.Modules.Storage:DEBUG:blivet: LVMVolumeGroupDevice.setup_parents: name: onn ; orig: True ;
22:11:32,674 WARNING org.fedoraproject.Anaconda.Modules.Storage:DEBUG:blivet: PartitionDevice.setup: Volume0_0p2 ; orig: True ; status: True ; controllable: True ;
22:11:32,675 WARNING org.fedoraproject.Anaconda.Modules.Storage:DEBUG:blivet: LVMPhysicalVolume.setup: device: /dev/md/Volume0_0p2 ; type: lvmpv ; status: False ;
22:11:32,676 WARNING org.fedoraproject.Anaconda.Modules.Storage:DEBUG:blivet: LVMLogicalVolumeDevice._destroy: onn-var_log_audit ; status: False ;
22:11:32,676 WARNING org.fedoraproject.Anaconda.Modules.Storage:INFO:program:Running [97] lvm lvremove --yes onn/var_log_audit --config= log {level=7 file=/tmp/lvm.log syslog=0} --devices=/dev/md/Volume0_0p2 ...
22:11:33,104 ERR rsyslogd:imjournal: open() failed for path: '/var/lib/rsyslog/imjournal.state.tmp': Operation not permitted [v8.2310.0-3.el9 try https://www.rsyslog.com/e/2433 ]
22:11:33,105 ERR rsyslogd:imjournal: open() failed for path: '/var/lib/rsyslog/imjournal.state.tmp': Operation not permitted [v8.2310.0-3.el9 try https://www.rsyslog.com/e/2433 ]
22:11:33,105 ERR rsyslogd:imjournal: open() failed for path: '/var/lib/rsyslog/imjournal.state.tmp': Operation not permitted [v8.2310.0-3.el9 try https://www.rsyslog.com/e/2433 ]
22:11:33,106 ERR rsyslogd:imjournal: open() failed for path: '/var/lib/rsyslog/imjournal.state.tmp': Operation not permitted [v8.2310.0-3.el9 try https://www.rsyslog.com/e/2433 ]
22:11:33,106 ERR rsyslogd:imjournal: open() failed for path: '/var/lib/rsyslog/imjournal.state.tmp': Operation not permitted [v8.2310.0-3.el9 try https://www.rsyslog.com/e/2433 ]
22:11:33,107 ERR rsyslogd:imjournal: open() failed for path: '/var/lib/rsyslog/imjournal.state.tmp': Operation not permitted [v8.2310.0-3.el9 try https://www.rsyslog.com/e/2433 ]
22:11:33,309 WARNING org.fedoraproject.Anaconda.Modules.Storage:INFO:program:stdout[97]:
22:11:33,310 WARNING org.fedoraproject.Anaconda.Modules.Storage:INFO:program:stderr[97]: /etc/lvm/profile/imgbased-pool.profile: stat failed: No such file or directory
22:11:33,310 WARNING org.fedoraproject.Anaconda.Modules.Storage: Check of pool onn/pool00 failed (status:64). Manual repair required!
I'm wondering if it has to do with installing oVirt node on a RAID mirror?
10 months, 1 week
Migrating from a self hosted engine to standalone
by redhat@intheoutback.com
Hi, I am in the process of moving our oVirt environment from a self hosted engine to a standalone engine on it's own HW. I have Googled and found a procedure for standalone > self hosted, but not the other way around.
My current situation is that I have 5 locations running oVirt 4.3 with 3 to 4 hypervisors and iSCSI storage backend, with a self hosted engine. All these locations are operational and short a downtime is acceptable if a must, losing a VM is NOT good.
I also have 1 oVirt environment 4.3 with 2 hypervisors and iSCSI backend that is my QA/Test.
All my networks are on internal networks with no outside world connections.
Most importantly, we are also looking at upgrading from 4.3 to 4.4
I have not found any straightforward way to migrate from self hosted to standalone.
My current plan is to do the following.
1) Create a new 4.4 standalone engine
2) Remove one hypervisor from the 4.3 cluster
3) Kickstart the hypervisor to RHEL 8.8 and configure ready for oVirt 4.4
4) Add the new host to the standalone engine.
5) Shutdown and export a number of VMs in the oVirt 4.3 and import them in to the new oVirt 4.4.
6) Repeat steps 2 > 5 until the everything is moved over.
Just wanting to get your expert opinions on this method or is there a much quicker easier method that will not risk the chances of losing the cluster/VMs or an extended outage.
Since we need to upgrade anyway from 4.3 to 4.4 I thought this the better method that upgrading the operational clusters.
Thanks
10 months, 1 week
Need to renew ovirt engine certificate
by Sachendra Shukla
Hi Team,
The oVirt Engine certificate is scheduled to expire on February 1, 2024.
Consequently, we need to initiate the certificate upgrade process. Could
you please share the steps and process for the certificate upgrade? I have
attached a snapshot below for your reference.
[image: image.png]
Regards,
Sachendra Shukla
Yagna iQ, Inc. and subsidiaries
HQ Address: Yagna iQ Inc. 7700 Windrose Ave, Suite G300, Plano, TX 75024,
USA 75024,
Website: https://yagnaiq.com
Contact Customer Support: support(a)yagnaiq.com
Privacy Policy: https://www.yagnaiq.com/privacy-policy/
*This communication and any attachments may contain confidential
information and/or copyright material of Yagna iQ, Inc. *
All unauthorized use, disclosure or distribution is prohibited. If you are
not the intended recipient, please notify Yagna iQ immediately by replying
to the email and destroy all copies of this communication.
This email has been scanned for all known viruses. The sender does not
accept liability for any damage inflicted by viewing the content of this
email.
10 months, 1 week
Re: [ovirt-devel] Re: oVirt 4.6 OS versions
by Guillaume Pavese
Unless someone from the community steps up to take RedHat's role, there
won't be any 4.6
On Fri, Jan 12, 2024 at 8:51 AM Diggy Mc <d03(a)bornfree.org> wrote:
>
> Isn't the oVirt 4.5 Hosted Engine built on CentOS Stream 8 ??? Stream 8
> ends in May 2024. I ask because we are still running on 4.4 and are
> thinking about holding off until oVirt 4.6 before we deploy a new oVirt
> environment.
> _______________________________________________
> Devel mailing list -- devel(a)ovirt.org
> To unsubscribe send an email to devel-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/MBQDZTC5K3R...
>
--
Ce message et toutes les pièces jointes (ci-après le “message”) sont
établis à l’intention exclusive de ses destinataires et sont confidentiels.
Si vous recevez ce message par erreur, merci de le détruire et d’en avertir
immédiatement l’expéditeur. Toute utilisation de ce message non conforme a
sa destination, toute diffusion ou toute publication, totale ou partielle,
est interdite, sauf autorisation expresse. L’internet ne permettant pas
d’assurer l’intégrité de ce message . Interactiv-group (et ses filiales)
décline(nt) toute responsabilité au titre de ce message, dans l’hypothèse
ou il aurait été modifié. IT, ES, UK.
<https://interactiv-group.com/disclaimer.html>
10 months, 1 week
oVirt 4.5.5 - Prb with qemu-kvm after upgrade
by Christophe GRENIER
Hello
I have a standalone oVirt Manager 4.5.5-1.el8 and two small clusters.
After upgrading ovir01001 in the "PreProd" cluster from AlmaLinux 8.8 to
8.9, the host was successfully activated but failed to take any VM.
centos-release-ceph-pacific.noarch 1.0-2.el8 @cs8-extras
centos-release-gluster10.noarch 1.0-1.el8s @cs8-extras-common
centos-release-nfv-common.noarch 1-3.el8 @cs8-extras
centos-release-nfv-openvswitch.noarch 1-3.el8 @cs8-extras
centos-release-opstools.noarch 1-12.el8 @cs8-extras
centos-release-ovirt45.noarch 8.9-1.el8s @cs8-extras-common
centos-release-storage-common.noarch 2-2.el8 @cs8-extras
centos-release-stream.x86_64 8.1-1.1911.0.7.el8 @cs8-extras
centos-release-virt-common.noarch 1-2.el8 @cs8-extras
vdsm.x86_64 4.50.5.1-1.el8 @centos-ovirt45
The problem has been "solved" by downgrading all qemu-* packages to the
version in AlmaLinux 8.8
ie. qemu-kvm-6.2.0-40.module_el8.9.0+3681+41cbbcc0.1.alma.1 =>
qemu-kvm-6.2.0-33.module_el8.8.0+3612+f18d2b89.alma.1.x86_64
Please find the relevent log:
- engine_when_failed.log https://pastebin.com/7MG6fYGY
- engine_when_ok.log https://pastebin.com/MegqmMbg
- vdsm_when_failed.log https://pastebin.com/ae4w0pix
- vdsm_when_ok.log https://pastebin.com/d7P0BWDN
Regards
--
,-~~-.___. ._.
/ | ' \ | |--------. Christophe GRENIER
( ) 0 | | | grenier(a)cgsecurity.org
\_/-, ,----' | | |
==== !_!-v---v--.
/ \-'~; .--------. TestDisk & PhotoRec
/ __/~| ._-""|| | Data Recovery
=( _____|_|____||________| https://www.cgsecurity.org
10 months, 1 week
Add Direct LUN to VM with Rest API
by LS CHENG
Hi
Anyone know how to add a Direct LUN to a VM o VM's?
I am trying to clone a couple of VM's fibre channel direct lun's with SAN's
snapshot technology and present those snapshots to another VM's, I would
like to do this with CLI but I cannot find any example for Fibre Channel
Disks and attach them to a VM.
Thanks!
10 months, 1 week
Configure OVN for oVirt failing - vdsm.tool.ovn_config.NetworkNotFoundError: hostname
by huw.m@twinstream.com
Hello,
When installing the self-hosted engine using rocky 9 as a host (using nightly builds), the install gets as far as running the below ansible task from ovirt-engine
- name: Configure OVN for oVirt
ansible.builtin.command: >
vdsm-tool ovn-config {{ ovn_central }} {{ ovn_tunneling_interface }} {{ ovn_host_fqdn }}
This command gets executed as vdsm-tool ovn-config 192.168.57.4 hostname.my.project.com
and fails with error
"stderr" : "Traceback (most recent call last):\n File \"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\", line 117, in get_network\n return networks[net_name]\nKeyError: 'virt-1.local.hyp.twinstream.com'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/bin/vdsm-tool\", line 195, in main\n return tool_command[cmd][\"command\"](*args)\n File \"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\", line 63, in ovn_config\n ip_address = get_ip_addr(get_network(network_caps(), net_name))\n File \"/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py\", line 119, in get_network\n raise NetworkNotFoundError(net_name)\nvdsm.tool.ovn_config.NetworkNotFoundError: hostname.my.project.com"
Running `vdsm-tool list-nets` on the host gives an empty list.
`ip a` gives
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:6d:16:65 brd ff:ff:ff:ff:ff:ff
altname enp0s6
altname ens6
inet 192.168.121.29/24 brd 192.168.121.255 scope global dynamic noprefixroute eth0
valid_lft 2482sec preferred_lft 2482sec
inet6 fe80::5054:ff:fe6d:1665/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:6b:f4:7b brd ff:ff:ff:ff:ff:ff
altname enp0s7
altname ens7
inet 192.168.56.151/24 brd 192.168.56.255 scope global noprefixroute eth1
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe6b:f47b/64 scope link
valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
link/ether 52:54:00:8f:40:45 brd ff:ff:ff:ff:ff:ff
altname enp0s8
altname ens8
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:2f:27:9d brd ff:ff:ff:ff:ff:ff
altname enp0s9
altname ens9
6: eth4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bondstorage state UP group default qlen 1000
link/ether 52:54:00:b8:9b:d7 brd ff:ff:ff:ff:ff:ff
altname enp0s10
altname ens10
7: eth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:c2:9a:bd brd ff:ff:ff:ff:ff:ff
altname enp0s11
altname ens11
8: eth6: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bondvm state UP group default qlen 1000
link/ether 52:54:00:ed:f7:cc brd ff:ff:ff:ff:ff:ff
altname enp0s12
altname ens12
9: eth7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 52:54:00:de:8a:48 brd ff:ff:ff:ff:ff:ff
altname enp0s13
altname ens13
10: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:8f:40:45 brd ff:ff:ff:ff:ff:ff
inet 192.168.57.4/24 brd 192.168.57.255 scope global noprefixroute bond0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe8f:4045/64 scope link
valid_lft forever preferred_lft forever
11: bondvm: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:ed:f7:cc brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:feed:f7cc/64 scope link
valid_lft forever preferred_lft forever
12: bondstorage: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:b8:9b:d7 brd ff:ff:ff:ff:ff:ff
inet 192.168.59.4/24 brd 192.168.59.255 scope global noprefixroute bondstorage
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:feb8:9bd7/64 scope link
valid_lft forever preferred_lft forever
13: bondvm.20@bondvm: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:ed:f7:cc brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:feed:f7cc/64 scope link
valid_lft forever preferred_lft forever
15: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:b2:5f:e2 brd ff:ff:ff:ff:ff:ff
inet 192.168.222.1/24 brd 192.168.222.255 scope global virbr0
valid_lft forever preferred_lft forever
16: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UNKNOWN group default qlen 1000
link/ether fe:16:3e:34:3d:ea brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe34:3dea/64 scope link
valid_lft forever preferred_lft forever
47: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 6e:27:5f:fa:e3:3a brd ff:ff:ff:ff:ff:ff
48: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 12:7c:d9:2e:cf:26 brd ff:ff:ff:ff:ff:ff
49: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether a2:35:6e:5e:4c:60 brd ff:ff:ff:ff:ff:ff
bond0 was selected as the ovirtmgmt bridge NIC. It currently only has one member interface eth2 using balance-xor. In the ovirt management console I can the see host in a down state and given the rest of the playbook ran which requires ssh connectivity between hosted-engine and host, I believe the network is generally setup correctly.
No other immediate errors I can. As vdsm-tool ovn-config expects a network to exist with value of the hostname, what is meant to be creating this on the host?
Thanks,
Huw
10 months, 1 week