oVirt 4.3 DWH with Grafana
by Vrgotic, Marko
Dear oVirt,
We are currently running oVirt 4.3, and an upgrade/migration to 4.4 won't be possible for a few more months.
I am looking for guidelines or a how-to for setting up Grafana with the Data Warehouse as a data source.
Has anyone already done this and would be willing to share the steps?
Kindly awaiting your reply.
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
o: +31 (35) 6774131
m: +31 (65) 5734174
e: m.vrgotic@activevideo.com
w: www.activevideo.com
ActiveVideo Networks BV. Mediacentrum 3745, Joop van den Endeplein 1, 1217 WJ Hilversum, The Netherlands.
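For reference, a minimal sketch of one way to wire this up, assuming the default
ovirt_engine_history DWH database on the engine host; the read-only user name, password,
and PostgreSQL paths below are placeholders to adapt (oVirt 4.3 ships PostgreSQL via the
rh-postgresql10 software collection, so the psql binary and data directory may differ):

# 1. Create a read-only user for Grafana in the DWH database (run on the engine host):
su - postgres -c "scl enable rh-postgresql10 -- psql ovirt_engine_history" <<'SQL'
CREATE USER grafana_ro WITH PASSWORD 'changeme';
GRANT CONNECT ON DATABASE ovirt_engine_history TO grafana_ro;
GRANT USAGE ON SCHEMA public TO grafana_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO grafana_ro;
SQL
# 2. Allow the Grafana host to reach PostgreSQL: add a line like
#      host ovirt_engine_history grafana_ro <grafana-host-ip>/32 md5
#    to pg_hba.conf (under the rh-postgresql10 data directory on 4.3), make sure
#    listen_addresses covers that network, and restart the PostgreSQL service.
# 3. In Grafana, add a PostgreSQL data source pointing at <engine-host>:5432,
#    database ovirt_engine_history, user grafana_ro, and build dashboards on the
#    DWH history tables (e.g. the hourly/daily sample tables).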
Error when trying to change master storage domain
by Matthew Benstead
Hello,
I'm trying to decommission the old master storage domain in oVirt and
replace it with a new one. All of the VMs have been migrated off of the
old master, and everything has been running on the new storage domain
for a couple of months. But when I try to put the old domain into
maintenance mode I get an error.
Old Master: vm-storage-ssd
New Domain: vm-storage-ssd2
The error is:
Failed to Reconstruct Master Domain for Data Center EDC2
As well as:
Sync Error on Master Domain between Host daccs01 and oVirt Engine.
Domain: vm-storage-ssd is marked as Master in oVirt Engine database but
not on the Storage side. Please consult with Support on how to fix this
issue.
2021-07-28 11:41:34,870-07 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
(EE-ManagedThreadFactory-engine-Thread-23) [] Master domain version is
not in sync between DB and VDSM. Domain vm-storage-ssd
marked as master, but the version in DB: 283 and in VDSM: 280
And:
Not stopping SPM on vds daccs01, pool id
f72ec125-69a1-4c1b-a5e1-313fcb70b6ff as there are uncleared tasks Task
'5fa9edf0-56c3-40e4-9327-47bf7764d28d', status 'finished'
After a couple minutes all the domains are marked as active again and
things continue, but vm-storage-ssd is still listed as the master
domain. Any thoughts?
This is on 4.3.10.4-1.el7 on CentOS 7.
engine=# SELECT storage_name, storage_pool_id, storage, status FROM storage_pool_with_storage_domain ORDER BY storage_name;
     storage_name      |           storage_pool_id            |                storage                 | status
-----------------------+--------------------------------------+----------------------------------------+--------
 compute1-iscsi-ssd    | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | yvUESE-yWUv-VIWL-qX90-aAq7-gK0I-EqppRL |      1
 compute7-iscsi-ssd    | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | 8ekHdv-u0RJ-B0FO-LUUK-wDWs-iaxb-sh3W3J |      1
 export-domain-storage | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | d3932528-6844-481a-bfed-542872ace9e5   |      1
 iso-storage           | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | f800b7a6-6a0c-4560-8476-2f294412d87d   |      1
 vm-storage-7200rpm    | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | a0bff472-1348-4302-a5c7-f1177efa45a9   |      1
 vm-storage-ssd        | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | 95acd9a4-a6fb-4208-80dd-1c53d6aacad0   |      1
 vm-storage-ssd2       | f72ec125-69a1-4c1b-a5e1-313fcb70b6ff | 829d0600-c3f7-4dae-a749-d7f05c6a6ca4   |      1
(7 rows)
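In case it helps when comparing with the "version in DB: 283 and in VDSM: 280" warning
above, a hedged sketch of checking what the engine database has recorded as the master
domain version (table/column names are assumed; verify them with \d storage_pool first):

# run on the engine host (prefix psql with "scl enable rh-postgresql10 --" if it is only
# available from the software collection on 4.3)
su - postgres -c "psql engine -c 'SELECT name, master_domain_version FROM storage_pool;'"
# the value shown here would be expected to match the 283 the engine logs, while VDSM is
# reading 280 from the domain metadata on the storage side.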
Thanks,
-Matthew
Template for Ubuntu 18.04 Server Issues
by jeremy_tourville@hotmail.com
I have built a system as a template on oVirt. Specifically, Ubuntu 18.04 server.
I am noticing an issue when creating new VMs from that template. I used the "Seal Template" checkbox when creating the template.
When I create a new Ubuntu VM I am getting duplicate IP addresses for all the machines created from the template.
It seems like the checkbox doesn't fully function as intended. I would need to do further manual steps to clear up this issue.
Has anyone else noticed this behavior? Is this expected or have I missed something?
Thanks for your input!
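A hedged sketch of the manual cleanup worth trying inside the 18.04 guest before
re-creating the template; the duplicate-IP symptom is commonly caused by clones sharing
the same /etc/machine-id, which systemd-networkd/netplan uses to derive the DHCP client
identifier, but treat that as a guess to verify, not a confirmed diagnosis of the
"seal" checkbox:

# inside the Ubuntu 18.04 guest, just before shutting it down to make the template
sudo truncate -s 0 /etc/machine-id
sudo rm -f /var/lib/dbus/machine-id
sudo ln -s /etc/machine-id /var/lib/dbus/machine-id
# optionally have netplan use the MAC address as the DHCP identifier instead of the DUID,
# e.g. add "dhcp-identifier: mac" under the interface in /etc/netplan/*.yaml
sudo shutdown -h now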
Terrible Disk Performance on Windows 10 VM
by regloff@gmail.com
I recently installed a Windows 10 VM under oVirt 4.4.5.11-1.el8
Also installed the drivers using "virtio-win-1.9.16.iso" (Then re-installed them after updates just in case it helped)
I found a similar complaint with VMWare (https://communities.vmware.com/t5/VMware-Workstation-Pro/vmware-workstati...)
So I looked into that and made a registry change for the AHCI controller, as well as setting 'viodiskcache' to write-back (seen in another thread on here). Those two changes seemed to help, but only marginally.
When I do just about anything, disk usage spikes to 100% and stays there for quite a while. Write speeds rarely break 100kb/sec.
Not even sure what to look for next. My Linux VMs don't seem to have this issue and the host it's running on is barely working at all. CPU and memory stay close to unused. oVirt didn't show a lot, but in task manager in the Windows VM - you can see disk queue just pegged completely.
I've given the VM 6GB of RAM, so that's not it. I even turned off paging in the Windows VM as well, to no avail.
This is an example of disk usage, just opening 'Groove Music' for the first time.
https://i.postimg.cc/FRLq28Mw/Disk-Activity.png
Any ideas? :)
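Not an answer, but a quick check worth doing from the host side; "win10" below is a
placeholder for the actual VM name:

# on the host running the VM
virsh -r list --all
virsh -r dumpxml win10 | grep -A8 '<disk'
# confirm the disk target uses bus='virtio' (or a virtio-scsi controller) rather than
# ide/sata, and note the cache= and io= attributes on the <driver> element; an emulated
# IDE/SATA disk would be consistent with the pegged queue and ~100 kb/sec writes described.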
Restored engine backup: The provided authorization grant for the auth code has expired.
by Nicolás
Hi,
I'm restoring a full oVirt engine backup, taken with the --scope=all
option, for oVirt 4.3.
I restored the backup on a fresh CentOS7 machine. The process went well,
but when trying to log into the restored authentication system I get the
following message which won't allow me to log in:
The provided authorization grant for the auth code has expired.
What does that mean and how can it be fixed?
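A guess, not a confirmed diagnosis: that SSO error is time-sensitive, so a clock mismatch
between the freshly installed CentOS 7 machine and the client/browser can make the auth
code look expired. A quick check on the restored engine host:

timedatectl                      # check date, time zone, and whether NTP is synchronized
chronyc tracking || ntpstat
# compare against the clock of the machine running the browser; re-sync and retry the
# login if they differ by more than a minute or two.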
Thanks.
Nicolás
Self hosted engine installation - Failed to deploy the VM on ISCSI storage
by ericsitunew@gmail.com
I was trying to install the latest oVirt engine, v4.4.7, but it failed at the following step when using the command-line deployment.
I have tried installing as the root user and as other sudoers, but hit the same issue.
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Copy configuration archive to storage]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["dd", "bs=20480", "count=1", "oflag=direct", "if=/var/tmp/localvmV8yIIX/5607a2c3-7e94-4403-ad86-7c3bcc745b16", "of=/rhev/data-center/mnt/blockSD/61612a87-4b25-4e09-aecd-02b63c679cf1/images/f9ebe70c-58f4-4229-a149-a18a09571d08/5607a2c3-7e94-4403-ad86-xxxxxxxxxxxx"], "delta": "0:00:00.011759", "end": "2020-08-14 09:03:35.702527", "msg": "non-zero return code", "rc": 1, "start": "2020-08-14 09:03:35.690768", "stderr": "dd: failed to open '/var/tmp/localvmV8yIIX/5607a2c3-7e94-4403-ad86-7c3bcc745b16': Permission denied", "stderr_lines": ["dd: failed to open '/var/tmp/localvmV8yIIX/5607a2c3-7e94-4403-ad86-xxxxxxxxxxxx': Permission denied"], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources
Just curious if anyone hits the same issue? Thank you.
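In case someone wants to poke at it, a hedged debugging sketch (it assumes the failing
task runs dd as the vdsm service user, which I have not verified; the localvmV8yIIX
directory name is just the temporary path from that run):

ls -ldZ /var/tmp /var/tmp/localvmV8yIIX
ls -lZ /var/tmp/localvmV8yIIX/5607a2c3-7e94-4403-ad86-7c3bcc745b16
mount | grep -w /var/tmp          # a separate mount with restrictive options could matter
sudo -u vdsm dd if=/var/tmp/localvmV8yIIX/5607a2c3-7e94-4403-ad86-7c3bcc745b16 \
    of=/dev/null bs=20480 count=1 # reproduce the read as the non-root user
# unexpected ownership, mode, or SELinux context on the temporary directory would explain
# "Permission denied" even though the deploy itself was started as root.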
Host not becoming active due to VDSM failure
by Vinícius Ferrão
Hello,
I have a host that's failing to bring up VDSM. The logs don't say anything specific, but there's a Python error about DHCP in them. Has anyone seen a similar issue?
[root@rhvpower ~]# systemctl status vdsmd
● vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: disabled)
Active: inactive (dead)
Jul 30 01:53:40 rhvpower.local.versatushpc.com.br systemd[1]: Dependency failed for Virtual Desktop Server Manager.
Jul 30 01:53:40 rhvpower.local.versatushpc.com.br systemd[1]: vdsmd.service: Job vdsmd.service/start failed with result 'dependency'.
Jul 30 12:34:12 rhvpower.local.versatushpc.com.br systemd[1]: Dependency failed for Virtual Desktop Server Manager.
Jul 30 12:34:12 rhvpower.local.versatushpc.com.br systemd[1]: vdsmd.service: Job vdsmd.service/start failed with result 'dependency'.
[root@rhvpower ~]# systemctl start vdsmd
A dependency job for vdsmd.service failed. See 'journalctl -xe' for details.
On the logs I got the following messages:
==> /var/log/vdsm/upgrade.log <==
MainThread::DEBUG::2021-07-30 12:34:55,143::libvirtconnection::168::root::(get) trying to connect libvirt
MainThread::INFO::2021-07-30 12:34:55,167::netconfpersistence::238::root::(_clearDisk) Clearing netconf: /var/lib/vdsm/staging/netconf
MainThread::INFO::2021-07-30 12:34:55,178::netconfpersistence::188::root::(save) Saved new config RunningConfig({'ovirtmgmt': {'netmask': '255.255.255.0', 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'ipaddr': '10.20.0.106', 'defaultRoute': True, 'dhcpv6': False, 'gateway': '10.20.0.1', 'mtu': 1500, 'switch': 'legacy', 'stp': False, 'bootproto': 'none', 'nameservers': ['10.20.0.1']}, 'servers': {'vlan': 172, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-other': {'vlan': 2020, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes1': {'vlan': 2021, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes3': {'vlan': 2023, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes2': {'vlan': 2022, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'nfs': {'vlan': 200, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'storage': {'vlan': 192, 'netmask': '255.255.255.240', 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': False, 'ipaddr': '192.168.10.6', 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes4': {'vlan': 2024, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}}, {'bond0': {'nics': ['enP48p1s0f2', 'enP48p1s0f3'], 'options': 'mode=4', 'switch': 'legacy', 'hwaddr': '98:be:94:78:cc:72'}}, {}) to [/var/lib/vdsm/staging/netconf/nets,/var/lib/vdsm/staging/netconf/bonds,/var/lib/vdsm/staging/netconf/devices]
MainThread::INFO::2021-07-30 12:34:55,179::netconfpersistence::238::root::(_clearDisk) Clearing netconf: /var/lib/vdsm/persistence/netconf
MainThread::INFO::2021-07-30 12:34:55,188::netconfpersistence::188::root::(save) Saved new config PersistentConfig({'ovirtmgmt': {'netmask': '255.255.255.0', 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'ipaddr': '10.20.0.106', 'defaultRoute': True, 'dhcpv6': False, 'gateway': '10.20.0.1', 'mtu': 1500, 'switch': 'legacy', 'stp': False, 'bootproto': 'none', 'nameservers': ['10.20.0.1']}, 'servers': {'vlan': 172, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-other': {'vlan': 2020, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes1': {'vlan': 2021, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes3': {'vlan': 2023, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes2': {'vlan': 2022, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'nfs': {'vlan': 200, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}, 'storage': {'vlan': 192, 'netmask': '255.255.255.240', 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': False, 'ipaddr': '192.168.10.6', 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'bootproto': 'none', 'nameservers': []}, 'xcat-nodes4': {'vlan': 2024, 'bonding': 'bond0', 'ipv6autoconf': False, 'bridged': True, 'dhcpv6': False, 'mtu': 1500, 'switch': 'legacy', 'defaultRoute': False, 'stp': False, 'bootproto': 'none', 'nameservers': []}}, {'bond0': {'nics': ['enP48p1s0f2', 'enP48p1s0f3'], 'options': 'mode=4', 'switch': 'legacy', 'hwaddr': '98:be:94:78:cc:72'}}, {}) to [/var/lib/vdsm/persistence/netconf/nets,/var/lib/vdsm/persistence/netconf/bonds,/var/lib/vdsm/persistence/netconf/devices]
MainThread::DEBUG::2021-07-30 12:34:55,188::cmdutils::130::root::(exec_cmd) /usr/share/openvswitch/scripts/ovs-ctl status (cwd None)
==> /var/log/vdsm/supervdsm.log <==
restore-net::INFO::2021-07-30 12:34:55,924::restore_net_config::69::root::(_restore_sriov_config) Non persisted SRIOV devices found: {'0033:01:00.0', '0003:01:00.0'}
restore-net::INFO::2021-07-30 12:34:55,924::restore_net_config::458::root::(restore) starting network restoration.
restore-net::DEBUG::2021-07-30 12:34:55,942::restore_net_config::366::root::(_wait_for_for_all_devices_up) All devices are up.
restore-net::DEBUG::2021-07-30 12:34:55,968::cmdutils::130::root::(exec_cmd) /sbin/tc qdisc show (cwd None)
restore-net::DEBUG::2021-07-30 12:34:55,989::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
restore-net::DEBUG::2021-07-30 12:34:56,087::plugin::261::root::(_check_version_mismatch) NetworkManager version 1.30.0
restore-net::DEBUG::2021-07-30 12:34:56,088::context::144::root::(register_async) Async action: Retrieve applied config: ethernet enP48p1s0f2 started
restore-net::DEBUG::2021-07-30 12:34:56,088::context::144::root::(register_async) Async action: Retrieve applied config: ethernet enP48p1s0f3 started
restore-net::DEBUG::2021-07-30 12:34:56,088::context::144::root::(register_async) Async action: Retrieve applied config: bridge xcat-nodes3 started
restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: bridge xcat-other started
restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: bridge xcat-nodes4 started
restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: bridge ovirtmgmt started
restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: bond bond0 started
restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.192 started
restore-net::DEBUG::2021-07-30 12:34:56,089::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.2020 started
restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.2022 started
restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.2024 started
restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.172 started
restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.2021 started
restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.2023 started
restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: vlan bond0.200 started
restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: bridge xcat-nodes1 started
restore-net::DEBUG::2021-07-30 12:34:56,090::context::144::root::(register_async) Async action: Retrieve applied config: bridge xcat-nodes2 started
restore-net::DEBUG::2021-07-30 12:34:56,091::context::144::root::(register_async) Async action: Retrieve applied config: bridge nfs started
restore-net::DEBUG::2021-07-30 12:34:56,091::context::144::root::(register_async) Async action: Retrieve applied config: bridge servers started
restore-net::DEBUG::2021-07-30 12:34:56,091::context::144::root::(register_async) Async action: Retrieve applied config: bridge virbr0 started
restore-net::DEBUG::2021-07-30 12:34:56,092::context::153::root::(finish_async) Async action: Retrieve applied config: ethernet enP48p1s0f2 finished
restore-net::DEBUG::2021-07-30 12:34:56,093::context::153::root::(finish_async) Async action: Retrieve applied config: ethernet enP48p1s0f3 finished
restore-net::DEBUG::2021-07-30 12:34:56,093::context::153::root::(finish_async) Async action: Retrieve applied config: bridge xcat-nodes3 finished
restore-net::DEBUG::2021-07-30 12:34:56,094::context::153::root::(finish_async) Async action: Retrieve applied config: bridge xcat-other finished
restore-net::DEBUG::2021-07-30 12:34:56,095::context::153::root::(finish_async) Async action: Retrieve applied config: bridge xcat-nodes4 finished
restore-net::DEBUG::2021-07-30 12:34:56,096::context::153::root::(finish_async) Async action: Retrieve applied config: bridge ovirtmgmt finished
restore-net::DEBUG::2021-07-30 12:34:56,097::context::153::root::(finish_async) Async action: Retrieve applied config: bond bond0 finished
restore-net::DEBUG::2021-07-30 12:34:56,098::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.192 finished
restore-net::DEBUG::2021-07-30 12:34:56,099::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.2020 finished
restore-net::DEBUG::2021-07-30 12:34:56,099::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.2022 finished
restore-net::DEBUG::2021-07-30 12:34:56,100::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.2024 finished
restore-net::DEBUG::2021-07-30 12:34:56,100::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.172 finished
restore-net::DEBUG::2021-07-30 12:34:56,101::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.2021 finished
restore-net::DEBUG::2021-07-30 12:34:56,101::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.2023 finished
restore-net::DEBUG::2021-07-30 12:34:56,102::context::153::root::(finish_async) Async action: Retrieve applied config: vlan bond0.200 finished
restore-net::DEBUG::2021-07-30 12:34:56,102::context::153::root::(finish_async) Async action: Retrieve applied config: bridge xcat-nodes1 finished
restore-net::DEBUG::2021-07-30 12:34:56,103::context::153::root::(finish_async) Async action: Retrieve applied config: bridge xcat-nodes2 finished
restore-net::DEBUG::2021-07-30 12:34:56,105::context::153::root::(finish_async) Async action: Retrieve applied config: bridge nfs finished
restore-net::DEBUG::2021-07-30 12:34:56,106::context::153::root::(finish_async) Async action: Retrieve applied config: bridge servers finished
restore-net::DEBUG::2021-07-30 12:34:56,107::context::153::root::(finish_async) Async action: Retrieve applied config: bridge virbr0 finished
restore-net::ERROR::2021-07-30 12:34:56,167::restore_net_config::462::root::(restore) restoration failed.
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/network/restore_net_config.py", line 460, in restore
unified_restoration()
File "/usr/lib/python3.6/site-packages/vdsm/network/restore_net_config.py", line 112, in unified_restoration
classified_conf = _classify_nets_bonds_config(available_config)
File "/usr/lib/python3.6/site-packages/vdsm/network/restore_net_config.py", line 237, in _classify_nets_bonds_config
net_info = NetInfo(netswitch.configurator.netinfo())
File "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py", line 323, in netinfo
_netinfo = netinfo_get(vdsmnets, compatibility)
File "/usr/lib/python3.6/site-packages/vdsm/network/netinfo/cache.py", line 268, in get
return _get(vdsmnets)
File "/usr/lib/python3.6/site-packages/vdsm/network/netinfo/cache.py", line 76, in _get
extra_info.update(_get_devices_info_from_nmstate(state, devices))
File "/usr/lib/python3.6/site-packages/vdsm/network/netinfo/cache.py", line 165, in _get_devices_info_from_nmstate
nmstate.get_interfaces(state, filter=devices)
File "/usr/lib/python3.6/site-packages/vdsm/network/netinfo/cache.py", line 164, in <dictcomp>
for ifname, ifstate in six.viewitems(
File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate/api.py", line 228, in is_dhcp_enabled
return util_is_dhcp_enabled(family_info)
File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate/bridge_util.py", line 137, in is_dhcp_enabled
return family_info[InterfaceIP.ENABLED] and family_info[InterfaceIP.DHCP]
KeyError: 'dhcp'
The engine is constantly fencing the machine, but it reboots and comes back with the same issue afterwards.
Thanks all.
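For whoever looks into this: the traceback is raised while VDSM parses the interface
state it gets from nmstate, so a hedged first step would be to dump that same state
directly and look for an interface whose ipv4/ipv6 section is enabled but has no "dhcp"
key:

nmstatectl show > /tmp/nmstate-state.yml
grep -n -A6 -e 'ipv4:' -e 'ipv6:' /tmp/nmstate-state.yml | less
# the interface that shows "enabled: true" but no "dhcp:" line in that family is the one
# tripping the KeyError in bridge_util.is_dhcp_enabled().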
Question about cloud-init
by tommy
Hello, I have a question about cloud-init:
After installing the VM, I installed cloud-init right away and enabled its
four services, and then I was able to use cloud-init from the admin interface.
However, I don't know how cloud-init knows where to get the relevant
information (such as the host name, SSH key, etc.) that I configured in
the admin interface, since I haven't configured a data source for
cloud-init.
Why does it work?
Thanks!
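A hedged explanation (my understanding, not verified against the code): the engine
generates a one-time cloud-init payload and attaches it to the VM as a small
config-drive/NoCloud style image when the VM is started with Run Once, and cloud-init's
default datasource auto-detection simply finds it, so nothing has to be configured in the
guest. Something like the following can confirm which datasource was actually used:

# inside the guest, after a Run Once boot with cloud-init enabled
blkid | grep -i -e config-2 -e cidata      # the attached payload volume, if still present
cat /var/lib/cloud/instance/datasource     # records the datasource cloud-init picked
cat /run/cloud-init/result.json            # overall result of the last cloud-init run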
Set fixed VNC/Spice Password for VMs.
by Merlin Timm
Good day to all,
I have a question about the console configuration of the VMs:
By default, each console connection to a VM gets a password that is valid for
120 seconds; after that it can't be used again. We currently have the
following concern:
We want to access and control the VMs via the VNC/SPICE server of the oVirt
host. We have already tried using the password from the console.vv file for
the connection, and that works so far. Unfortunately, we have to repeat this
every two minutes when we want to connect again. We are currently building
an automated test pipeline, and for this we need to access the VMs
remotely before the OS starts, so we want to be independent of a VNC server
on the guest. This is only possible if we can connect to the VNC/SPICE
server on the oVirt host.
My question: would it be possible to fix the password, or to read it out via
the API every time we want to connect?
I would appreciate a reply very much!
Best regards
Merlin Timm
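Not a full answer, but a hedged sketch of the second option (requesting a fresh console
ticket via the REST API before each connection); the engine URL, credentials, VM id, and
the maximum accepted expiry below are placeholders/assumptions to check against your setup:

curl -s -k -u 'admin@internal:PASSWORD' \
     -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
     -X POST 'https://engine.example.com/ovirt-engine/api/vms/<vm-id>/ticket' \
     -d '<action><ticket><expiry>3600</expiry></ticket></action>'
# the response should contain <ticket><value>...</value></ticket>; that value is the
# console password for the requested lifetime, so a pipeline can fetch a new one right
# before each VNC/SPICE connection instead of relying on the 120-second default.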