oVirt 4.6 compatibility with el8
by Nathanaël Blanchet
Hello community!
I noticed that RHEL 9 no longer supports ISP2532-based 8Gb
Fibre Channel HBAs.
Given that all of my oVirt hosts are ISP2532-based, I'd like to know
whether oVirt 4.6 will still be compatible with el8 or will support only el9.
--
Nathanaël Blanchet
Systems and Network Administrator
IT and Network Service (SIRE)
Information Systems Department
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tel. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
2 years
oVirt hosted-engine deployment times out while "Wait for the host to be up"
by brwsergmslst@gmail.com
Hi all,
I am currently trying to deploy the hosted engine, without success.
Unfortunately I cannot see what I am missing here. It would be great if you could take a look and help me out. Below is an excerpt of the generated logfile.
```
2023-03-22 20:34:10,417+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Get active list of active firewalld zones]
2023-03-22 20:34:12,222+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 changed: [localhost]
2023-03-22 20:34:13,527+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Configure libvirt firewalld zone]
2023-03-22 20:34:20,246+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 changed: [localhost]
2023-03-22 20:34:21,550+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Reload firewall-cmd]
2023-03-22 20:34:23,957+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 changed: [localhost]
2023-03-22 20:34:25,462+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Add host]
2023-03-22 20:34:27,469+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 changed: [localhost]
2023-03-22 20:34:28,672+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Include after_add_host tasks files]
2023-03-22 20:34:30,678+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.ovirt.hosted_engine_setup : Let the user connect to the bootstrap engine VM to manually fix host configuration]
2023-03-22 20:34:31,882+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 skipping: [localhost]
2023-03-22 20:34:32,987+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
2023-03-22 20:34:33,890+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 skipping: [localhost]
2023-03-22 20:34:34,893+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
2023-03-22 20:34:36,096+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 ok: [localhost]
2023-03-22 20:34:37,100+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Always revoke the SSO token]
2023-03-22 20:34:38,705+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 ok: [localhost]
2023-03-22 20:34:40,111+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
2023-03-22 20:34:41,015+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 ok: [localhost]
2023-03-22 20:34:42,020+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
2023-03-22 20:34:44,028+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 ok: [localhost]
2023-03-22 20:34:45,032+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Wait for the host to be up]
2023-03-22 20:56:33,746+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'changed': False, 'ovirt_hosts': [{'href': '/ovirt-engine/api/hosts/17ad8088-c9e9-433f-90b9-ce8023a625e6', 'comment': '', '
id': '17ad8088-c9e9-433f-90b9-ce8023a625e6', 'name': 'vhost-tmp01.example.com', 'address': 'vhost-tmp01.example.com', 'affinity_labels': [], 'auto_numa_status': 'unknown', 'certificate': {'organization': 'example.com', 'subject': 'O=example.com,CN=vhost-tmp01.example.com'}, 'cluster': {'href': '/ovirt-engine/api/clusters/4effacb7-e5dd-4e52-86c9-90ebd2aafa0d', 'id': '4effacb7-e5dd-4e52-86c9-90ebd2aafa0d'}, 'cpu
': {'speed': 0.0, 'topology': {}}, 'cpu_units': [], 'device_passthrough': {'enabled': False}, 'devices': [], 'external_network_provider_configurations': [], 'external_status': 'ok', 'hardware_information': {'supported_rng_sources': []}, 'h
ooks': [], 'katello_errata': [], 'kdump_status': 'unknown', 'ksm': {'enabled': False}, 'max_scheduling_memory': 0, 'memory': 0, 'network_attachments': [], 'nics': [], 'numa_nodes': [], 'numa_supported': False, 'os': {'custom_kernel_cmdline
': ''}, 'ovn_configured': False, 'permissions': [], 'port': 54321, 'power_management': {'automatic_pm_enabled': True, 'enabled': False, 'kdump_detection': True, 'pm_proxies': []}, 'protocol': 'stomp', 'reinstallation_required': False, 'se_linux': {}, 'spm': {'priority': 5, 'status': 'none'}, 'ssh': {'fingerprint': 'SHA256:IDhzMTO49OoNIHmLGnOIGwTn8LQB/lYrJdUnak144Q4', 'port': 22, 'public_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNWbUlwbIiSliqc6OCFwG4w6/OaJb63JijJ1okaj6Y3gqNPO2XZWDTfwraIqm0S0SGlVk/g0oYcYIQ/0hU5Q+bE='}, 'statistics': [], 'status': 'install_failed', 'storage_connection_extensions': [], 'summary': {'total': 0}, 'tags': [], 'transparent_huge_pages': {'enabled': False}, 'type': 'rhel', 'unmanaged_networks': [], 'update_available': False, 'vgpu_placement': 'consolidated'}], 'invocation': {'module_args': {'pattern': 'name=vhost-tmp01.example.com', 'fetch_nested': False, 'nested_attributes': [], 'follow': [], 'all_content': False, 'cluster
_version': None}}, '_ansible_no_log': None, 'attempts': 120}
2023-03-22 20:56:33,847+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 ignored: [localhost]: FAILED! => {"attempts": 120, "changed": false, "ovirt_hosts": [{"address": "vhost-tmp01.example.com", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": "example.com", "subject": "O=example.com,CN=vhost-tmp01.example.com"}, "cluster": {"href": "/ovirt-engine/api/clusters/4effacb7-e5dd-4e52-86c9-90ebd2aafa0d", "id": "4effacb7-e5dd-4e52-86c9-90ebd2aafa0d"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "cpu_units": [], "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/17ad8088-c9e9-433f-90b9-ce8023a625e6", "id": "17ad8088-c9e9-433f-90b9-ce8023a625e6", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_
scheduling_memory": 0, "memory": 0, "name": "vhost-tmp01.example.com", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "ovn_configured": false, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "reinstallation_required": false, "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:IDhzMTO49OoNIHmLGnOIGwTn8LQB/lYrJdUnak144Q4", "port": 22, "public_key": "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNWbUlwbIiSliqc6OCFwG4w6/OaJb63JijJ1okaj6Y3gqNPO2XZWDTfwraIqm0S0SGlVk/g0oYcYIQ/0hU5Q+bE="}, "statistics": [], "status": "install_failed", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "rhel", "unmanaged_networks": [], "update_available": false, "vgpu_placement": "
consolidated"}]}
2023-03-22 20:56:34,750+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
2023-03-22 20:56:35,655+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'msg': 'Host is not up, please check logs, perhaps also on the engine machine', '_ansible_no_log': None, 'changed': False}
2023-03-22 20:56:35,755+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost]: FAILED! => {"changed": false, "msg": "Host is not up, please check logs, perhaps also on the engine machine"}
```
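For what it's worth, the failure is visible in the dump above: the "Wait for the host to be up" task polled 120 times ('attempts': 120) while the host sat in 'install_failed', so the real error is most likely in the host-deploy logs on the engine VM. The task's retry behaviour amounts to a loop like this sketch (function names are hypothetical, not the actual role code):

```python
import time

def wait_for_host_up(fetch_status, attempts=120, delay=10):
    """Poll fetch_status() until the host reports 'up'.

    Mirrors the retry behaviour of the 'Wait for the host to be up'
    task ('attempts': 120 in the log above); fetch_status is a
    stand-in for the oVirt API query."""
    for _ in range(attempts):
        status = fetch_status()
        if status == "up":
            return True
        if status == "install_failed":
            # Retrying cannot help once host deployment has failed.
            return False
        time.sleep(delay)
    return False
```

Since the host reached 'install_failed', no amount of retrying helps; the install failure itself is what needs diagnosing.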
Thank you in advance.
Best regards
2 years
IBM ESS remote filesystem as POSIX compliant fs?
by arc@b4restore.com
Hi all,
We are trying to mount a remote filesystem in oVirt from an IBM ESS3500, but it seems to be working against us.
Every time I try to mount it I get this in supervdsm.log (two different tries):
MainProcess|jsonrpc/7::DEBUG::2023-03-31 10:55:41,808::supervdsm_server::78::SuperVdsm.ServerCallback::(wrapper) call mount with (<vdsm.supervdsm_server._SuperVdsm object at 0x7f7595dba9b0>, '/essovirt01', '/rhev/data-center/mnt/_essovirt01') {'mntOpts': 'rw,relatime,dev=essovirt01', 'vfstype': 'gpfs', 'cgroup': None}
MainProcess|jsonrpc/7::DEBUG::2023-03-31 10:55:41,808::commands::217::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/mount -t gpfs -o rw,relatime,dev=essovirt01 /essovirt01 /rhev/data-center/mnt/_essovirt01 (cwd None)
MainProcess|jsonrpc/7::DEBUG::2023-03-31 10:55:41,941::commands::230::root::(execCmd) FAILED: <err> = b'mount: /rhev/data-center/mnt/_essovirt01: mount(2) system call failed: Stale file handle.\n'; <rc> = 32
MainProcess|jsonrpc/7::ERROR::2023-03-31 10:55:41,941::supervdsm_server::82::SuperVdsm.ServerCallback::(wrapper) Error in mount
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 80, in wrapper
    res = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 119, in mount
    cgroup=cgroup)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 263, in _mount
    _runcmd(cmd)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 291, in _runcmd
    raise MountError(cmd, rc, out, err)
vdsm.storage.mount.MountError: Command ['/usr/bin/mount', '-t', 'gpfs', '-o', 'rw,relatime,dev=essovirt01', '/essovirt01', '/rhev/data-center/mnt/_essovirt01'] failed with rc=32 out=b'' err=b'mount: /rhev/data-center/mnt/_essovirt01: mount(2) system call failed: Stale file handle.\n'
MainProcess|mpathhealth::DEBUG::2023-03-31 10:55:48,993::supervdsm_server::78::SuperVdsm.ServerCallback::(wrapper) call dmsetup_run_status with ('multipath',) {}
MainProcess|mpathhealth::DEBUG::2023-03-31 10:55:48,993::commands::137::common.commands::(start) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/dmsetup status --target multipath (cwd None)
MainProcess|mpathhealth::DEBUG::2023-03-31 10:55:49,000::commands::82::common.commands::(run) SUCCESS: <err> = b''; <rc> = 0
MainProcess|mpathhealth::DEBUG::2023-03-31 10:55:49,000::supervdsm_server::85::SuperVdsm.ServerCallback::(wrapper) return dmsetup_run_status with b'360050764008100e42800000000000223: 0 629145600 multipath 2 0 1 0 2 1 A 0 1 2 8:192 A 0 0 1 E 0 1 2 8:144 A 0 0 1 \n360050764008100e42800000000000229: 0 629145600 multipath 2 0 1 0 2 1 A 0 1 2 8:208 A 0 0 1 E 0 1 2 8:160 A 0 0 1 \n360050764008100e4280000000000022a: 0 10485760 multipath 2 0 1 0 2 1 A 0 1 2 8:176 A 0 0 1 E 0 1 2 8:224 A 0 0 1 \n360050764008100e42800000000000260: 0 1048576000 multipath 2 0 1 0 2 1 A 0 2 2 8:16 A 0 0 1 8:80 A 0 0 1 E 0 2 2 8:48 A 0 0 1 8:112 A 0 0 1 \n360050764008100e42800000000000261: 0 209715200 multipath 2 0 1 0 2 1 A 0 2 2 8:64 A 0 0 1 8:128 A 0 0 1 E 0 2 2 8:32 A 0 0 1 8:96 A 0 0 1 \n360050764008102edd8000000000001ab: 0 8589934592 multipath 2 0 1 0 2 1 A 0 1 2 65:128 A 0 0 1 E 0 1 2 65:32 A 0 0 1 \n360050764008102f558000000000001a9: 0 8589934592 multipath 2 0 1 0 2 1 A 0 1 2 65:16 A 0 0 1 E 0 1 2 65:48 A
0 0 1 \n3600507640081820ce800000000000077: 0 838860800 multipath 2 0 1 0 2 1 A 0 1 2 8:240 A 0 0 1 E 0 1 2 65:0 A 0 0 1 \n3600507680c800058d000000000000484: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 65:160 A 0 0 1 E 0 1 2 66:80 A 0 0 1 \n3600507680c800058d000000000000485: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 65:176 A 0 0 1 E 0 1 2 66:96 A 0 0 1 \n3600507680c800058d000000000000486: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 66:112 A 0 0 1 E 0 1 2 65:192 A 0 0 1 \n3600507680c800058d000000000000487: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 66:128 A 0 0 1 E 0 1 2 65:208 A 0 0 1 \n3600507680c800058d000000000000488: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 65:224 A 0 0 1 E 0 1 2 66:144 A 0 0 1 \n3600507680c800058d000000000000489: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 66:160 A 0 0 1 E 0 1 2 65:240 A 0 0 1 \n3600507680c800058d00000000000048a: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 66:0 A 0 0 1 E 0 1 2 66:176 A 0 0 1 \n3600507680c800058d00000000000048b: 0 20971520 multipath 2 0 1 0 2 1
A 0 1 2 66:192 A 0 0 1 E 0 1 2 66:16 A 0 0 1 \n3600507680c800058d00000000000048c: 0 419430400 multipath 2 0 1 0 2 1 A 0 1 2 65:144 A 0 0 1 E 0 1 2 66:64 A 0 0 1 \n3600507680c800058d0000000000004b1: 0 41943040 multipath 2 0 1 0 2 1 A 0 1 2 66:208 A 0 0 1 E 0 1 2 66:32 A 0 0 1 \n3600507680c800058d0000000000004b2: 0 41943040 multipath 2 0 1 0 2 1 A 0 1 2 66:48 A 0 0 1 E 0 1 2 66:224 A 0 0 1 \n360050768108100c9d0000000000001aa: 0 8589934592 multipath 2 0 1 0 2 1 A 0 1 2 65:64 A 0 0 1 E 0 1 2 65:112 A 0 0 1 \n360050768108180ca48000000000001a9: 0 8589934592 multipath 2 0 1 0 2 1 A 0 1 2 65:96 A 0 0 1 E 0 1 2 65:80 A 0 0 1 \n'
MainProcess|jsonrpc/0::DEBUG::2023-03-31 10:55:49,938::supervdsm_server::78::SuperVdsm.ServerCallback::(wrapper) call mount with (<vdsm.supervdsm_server._SuperVdsm object at 0x7f7595dba9b0>, '/essovirt01', '/rhev/data-center/mnt/_essovirt01') {'mntOpts': 'rw,relatime', 'vfstype': 'gpfs', 'cgroup': None}
MainProcess|jsonrpc/0::DEBUG::2023-03-31 10:55:49,939::commands::217::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/mount -t gpfs -o rw,relatime /essovirt01 /rhev/data-center/mnt/_essovirt01 (cwd None)
MainProcess|jsonrpc/0::DEBUG::2023-03-31 10:55:49,944::commands::230::root::(execCmd) FAILED: <err> = b'mount: /rhev/data-center/mnt/_essovirt01: wrong fs type, bad option, bad superblock on /essovirt01, missing codepage or helper program, or other error.\n'; <rc> = 32
MainProcess|jsonrpc/0::ERROR::2023-03-31 10:55:49,944::supervdsm_server::82::SuperVdsm.ServerCallback::(wrapper) Error in mount
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 80, in wrapper
    res = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 119, in mount
    cgroup=cgroup)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 263, in _mount
    _runcmd(cmd)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 291, in _runcmd
    raise MountError(cmd, rc, out, err)
vdsm.storage.mount.MountError: Command ['/usr/bin/mount', '-t', 'gpfs', '-o', 'rw,relatime', '/essovirt01', '/rhev/data-center/mnt/_essovirt01'] failed with rc=32 out=b'' err=b'mount: /rhev/data-center/mnt/_essovirt01: wrong fs type, bad option, bad superblock on /essovirt01, missing codepage or helper program, or other error.\n'
Earlier, on the same oVirt version, I tried to map a SAN LUN directly to the hypervisor and create a local GPFS filesystem, and that could be mounted with the parameters below in the GUI:
Storage Type : POSIX Compliant FS
HOST : The host that has scale installed and mounted
Path : /essovirt01
VFS Type: gpfs
Mount Options: rw,relatime,dev=essovirt01
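For reference, these GUI fields map directly onto the mount command that supervdsm runs; a simplified sketch of that mapping (an illustration, not vdsm's actual code):

```python
def build_mount_cmd(vfstype, mnt_opts, path, mountpoint):
    # Reconstructs the invocation seen in supervdsm.log:
    # /usr/bin/mount -t gpfs -o rw,relatime,dev=essovirt01 \
    #     /essovirt01 /rhev/data-center/mnt/_essovirt01
    cmd = ["/usr/bin/mount", "-t", vfstype]
    if mnt_opts:
        cmd += ["-o", mnt_opts]
    cmd += [path, mountpoint]
    return cmd

cmd = build_mount_cmd(
    "gpfs", "rw,relatime,dev=essovirt01",
    "/essovirt01", "/rhev/data-center/mnt/_essovirt01",
)
```

Since an identically built command succeeds for the local filesystem, the "Stale file handle" error likely originates on the GPFS side rather than in how oVirt assembles the mount.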
This works, but with a remote filesystem it does not. I'm not sure why, as it should be pretty close to a local filesystem: access is granted with "File system access: essovirt01 (rw, root allowed)" and it is mounted as "/essovirt01 on /essovirt01 type gpfs (rw,relatime,seclabel)".
Does anybody have a clue what to do? It seems weird that there should be such a difference between a locally owned and a remotely owned fs when mounted through the GPFS remote fs/remote cluster option.
And just to verify, I have given the filesystem the correct permissions for oVirt: drwxr-xr-x. 2 vdsm kvm 262144 Mar 31 10:33 essovirt01
Thanks in advance.
Christiansen
2 years
oVirt Self-Hosted Engine VM issue
by linim@gdls.com
Hello
If this is a repeat discussion, I do apologize, but I am looking to see if anyone else out there has had, or heard of, this issue.
I am attempting to install a KVM host with a self-hosted engine. When the installer goes to deploy the oVirt engine on the VM being created under its temporary IP, the VM disconnects and I lose all network connectivity to it.
This is an attempt to install OLVM using the self-hosted engine, referencing Oracle Doc ID 2933018.1. I have engaged Oracle support, but I am hoping that someone else has run into this issue and can offer up avenues of investigation.
If this is outside the scope of an oVirt discussion because it is OLVM, then this can be closed out.
2 years
Encrypted VNC request using SASL not maintained after VM migration
by Jon Sattelberger
I recently followed the instructions for enabling VNC encryption for FIPS-enabled hosts [1]. The VNC console seems to be fine on the host where the VM is initially started (excluding noVNC in the browser). However, the qemu-kvm arguments are not maintained properly upon VM migration, which adds "password=on" to the -vnc argument. Subsequent VNC console requests then result in an authentication failure. SPICE seems to be fine. All hosts and the engine are FIPS enabled, running oVirt 4.5.4-1.el8.
Is there a way to maintain the absence of "password=on" after VM migration? Perhaps a hook in the interim.
Initial VM start:
-object {"qom-type":"tls-creds-x509","id":"vnc-tls-creds0","dir":"/etc/pki/vdsm/libvirt-vnc","endpoint":"server","verify-peer":false} -vnc 192.168.100.67:0,tls-creds=vnc-tls-creds0,sasl=on,audiodev=audio1 -k en-us
Debug output from remote-viewer:
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.812: vncconnection.c Possible VeNCrypt sub-auth 263
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.812: vncconnection.c Emit main context 12
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.812: vncconnection.c Requested auth subtype 263
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.813: vncconnection.c Waiting for VeNCrypt auth subtype
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.813: vncconnection.c Choose auth 263
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.813: vncconnection.c Checking if credentials are needed
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.813: vncconnection.c No credentials required
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.813: vncconnection.c Read error Resource temporarily unavailable
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.841: vncconnection.c Do TLS handshake
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.944: vncconnection.c Checking if credentials are needed
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.944: vncconnection.c Want a TLS clientname
... snip ...
Migrated VM:
-object {"qom-type":"tls-creds-x509","id":"vnc-tls-creds0","dir":"/etc/pki/vdsm/libvirt-vnc","endpoint":"server","verify-peer":false} -vnc 192.168.100.68:0,password=on,tls-creds=vnc-tls-creds0,sasl=on,audiodev=audio1 -k en-us
Debug output from remote-viewer:
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.487: vncconnection.c Possible VeNCrypt sub-auth 261
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.487: vncconnection.c Emit main context 12
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.488: vncconnection.c Requested auth subtype 261
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.488: vncconnection.c Waiting for VeNCrypt auth subtype
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.488: vncconnection.c Choose auth 261
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.488: vncconnection.c Checking if credentials are needed
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.488: vncconnection.c No credentials required
... snip ...
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.780: vncconnection.c Checking auth result
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.808: vncconnection.c Fail Authentication failed
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.808: vncconnection.c Error: Authentication failed
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.808: vncconnection.c Emit main context 16
(remote-viewer:1495270): virt-viewer-WARNING **: 12:50:29.808: vnc-session: got vnc error Authentication failed
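Comparing the two -vnc strings from the logs isolates the extra token; a small sketch (option strings copied verbatim from above):

```python
def vnc_opts(arg):
    # Drop the leading display address; keep the option tokens.
    return set(arg.split(",")[1:])

initial = "192.168.100.67:0,tls-creds=vnc-tls-creds0,sasl=on,audiodev=audio1"
migrated = "192.168.100.68:0,password=on,tls-creds=vnc-tls-creds0,sasl=on,audiodev=audio1"
extra = vnc_opts(migrated) - vnc_opts(initial)  # option added after migration
```

The only delta is the password=on token, which lines up with the different VeNCrypt sub-auth and the authentication failure seen after migration.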
Thank you,
Jon
[1] https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/...
2 years
Failing "change Master storage domain" from gluster to iscsi
by Diego Ercolani
In the current release of oVirt (4.5.4) I'm experiencing a failure when changing the master storage domain from a gluster volume to any other domain.
The GUI only reports a "general" error.
Watching the engine log:
2023-03-28 11:51:16,601Z WARN [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [] Unexpected return value: TaskStatus [code=331, message=value=Tar command failed: ({'reader': {'cmd': ['/usr/bin/tar', 'cf', '-', '--exclude=./lost+found', '-C', '/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/master', '.'], 'rc': 1, 'err': '/usr/bin/tar: ./tasks/20a9aa7f-80f5-403b-b296-ea95d9fd3f97: file changed as we read it\n/usr/bin/tar: ./tasks/87783efa-42ac-4cd9-bda5-ad68c59bb881/87783efa-42ac-4cd9-bda5-ad68c59bb881.task: file changed as we read it\n'}},) abortedcode=331]
2023-03-28 11:51:16,601Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [] Failed in 'HSMGetAllTasksStatusesVDS' method
It seems that something is changing files under the directory, but:
[vdsm@ovirt-node2 4745320f-bfc3-46c4-8849-b4fe8f1b2de6]$ /usr/bin/tar -cf - --exclude=./lost+found -C '/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/master' '.' > /tmp/tar.tar
/usr/bin/tar: ./tasks/20a9aa7f-80f5-403b-b296-ea95d9fd3f97: file changed as we read it
/usr/bin/tar: ./tasks: file changed as we read it
[vdsm@ovirt-node2 master]$ find '/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/master' -mtime -1
/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/master/tasks
[vdsm@ovirt-node2 master]$ ls -l /rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/master/
total 0
drwxr-xr-x. 6 vdsm kvm 182 Mar 28 11:51 tasks
drwxr-xr-x. 2 vdsm kvm 6 Mar 26 20:36 vms
[vdsm@ovirt-node2 master]$ date; stat tasks
Tue Mar 28 12:04:06 UTC 2023
File: tasks
Size: 182 Blocks: 0 IO Block: 131072 directory
Device: 31h/49d Inode: 12434008067414313592 Links: 6
Access: (0755/drwxr-xr-x) Uid: ( 36/ vdsm) Gid: ( 36/ kvm)
Context: system_u:object_r:fusefs_t:s0
Access: 2023-03-28 11:55:17.771046746 +0000
Modify: 2023-03-28 11:51:16.641145314 +0000
Change: 2023-03-28 11:51:16.641145314 +0000
Birth: -
It seems the tasks directory hasn't been touched since
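One detail worth knowing: GNU tar reserves exit status 1 for exactly this "some files differ / file changed as we read it" condition (2 would be a fatal error), and vdsm surfaces that status 1 as task error 331. A self-contained sketch of the same invocation against a quiescent temporary directory (placeholder paths, not the gluster mount):

```python
import pathlib
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
    # A directory nothing is writing to, standing in for .../master
    pathlib.Path(src, "dummy.task").write_text("quiescent")
    proc = subprocess.run(
        ["tar", "cf", str(pathlib.Path(dst, "master.tar")),
         "--exclude=./lost+found", "-C", src, "."],
    )
    rc = proc.returncode  # 0 when nothing changes underneath tar; 1 otherwise
```

On the live gluster mount the tasks directory is being rewritten while tar reads it (the Modify timestamp above coincides with the failure), so rc=1 is expected there.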
2 years
clock skew in hosted engine and VMs due to slow IO storage
by Diego Ercolani
I don't know why (though I suppose it is related to storage speed), but the virtual machines tend to show a clock skew ranging from a few days to a century forward (2177).
I see in the journal of the engine:
Mar 28 13:19:40 ovirt-engine.ovirt NetworkManager[1158]: <info> [1680009580.2045] dhcp4 (eth0): state changed new lease, address=192.168.123.20
Mar 28 13:24:40 ovirt-engine.ovirt NetworkManager[1158]: <info> [1680009880.2042] dhcp4 (eth0): state changed new lease, address=192.168.123.20
Mar 28 13:29:40 ovirt-engine.ovirt NetworkManager[1158]: <info> [1680010180.2039] dhcp4 (eth0): state changed new lease, address=192.168.123.20
Apr 01 08:15:42 ovirt-engine.ovirt chronyd[1072]: Forward time jump detected!
Apr 01 08:15:42 ovirt-engine.ovirt NetworkManager[1158]: <info> [1680336942.4396] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Apr 01 08:15:42 ovirt-engine.ovirt chronyd[1072]: Can't synchronise: no selectable sources
When this happens in the hosted engine, typically:
1. the DWH becomes inconsistent, as I stated here: https://lists.ovirt.org/archives/list/users@ovirt.org/thread/KPW5FFKG3AI6... or https://lists.ovirt.org/archives/list/users@ovirt.org/thread/WUNZUSZ2ARRL...
2. the skew causes the engine to kick off the nodes, which appear "down" in "connecting" state.
This compromises all tasks in a pending state and triggers countermeasures from the ovirt-engine manager and also from the vdsm daemon.
I currently put an "hwclock --hctosys" in the engine's crontab every 5 minutes, since the hardware clock doesn't seem to skew.
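For reference, the workaround described amounts to a crontab entry along these lines (the path to hwclock is an assumption):

```
*/5 * * * * /usr/sbin/hwclock --hctosys
```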
2 years
How to enable the storage pool correctly
by ziyi Liu
The /var/ folder is full, so I can't get into the web UI to fix it; I can only use command-line mode:
vdsm-client StorageDomain activate
vdsm-client StorageDomain attach
vdsm-client StoragePool connect
vdsm-client StoragePool connectStorageServer
I have tried these commands and they all return message=Unknown pool id, pool not connected.
2 years
Failing vm backup
by Giulio Casella
Hi,
since yesterday, the backup for one of my VMs has been failing. Backups are
performed by Storware vProtect, based on a CBT strategy.
In oVirt events I can see:
VDSM host03 command StartNbdServerVDS failed: Bitmap does not exist:
"{'reason': 'Bitmap does not exist in
/rhev/data-center/mnt/blockSD/459011cf-ebb6-46ff-831d-8ccfafd82c8a/images/a9ec8085-6ac3-4be0-bbd2-7752f7e29368/083f5dc5-9003-4908-96b1-5f750b5a4197',
'bitmap': '1b88a937-0ab8-4f6e-a117-bbd522e21448'}"
Transfer was stopped by system. Reason: failed to create a signed image
ticket.
Backups for other VMs are working correctly. What could it be? A somehow
corrupted image?
I tried migrating the VM to other hypervisors, with no luck. I haven't tried
moving the disks (two: one for the OS, one for data) to another data
domain, since that is not a fast operation (about 2.8 TB).
The VM is working correctly, but Murphy's law states this machine is a
fileserver with users' data and email :-(
Any hint?
TIA,
gc
--
Giulio Casella giulio at di.unimi.it
System and network architect
Computer Science Dept. - University of Milano
2 years