VDSM certs expired, manual renewal not working
by cen
Hi
Our VDSM certs have expired; both hosts are unassigned and can't be put
into maintenance from the UI.
vdsm-client is not working; it times out even with the --insecure flag. Do
host and port need to be specified when running it locally, or should the defaults work?
The error in the console events is: Get Host Capabilities Failed: PKIX path
validation failed...
I followed a RHV guide for this exact situation and generated a new vdsm
certificate using the ovirt-engine CA.
The new cert seems identical to the old one; everything matches (algorithms,
extensions, CA, CN, SAN, etc.), just with a new date.
After restarting libvirtd and vdsmd on the host with the new cert in place,
the host is still not reachable.
However, the error message is now slightly different:
get Host Capabilities failed: Received fatal error: certificate_expired
The cert was replaced in the following locations:
/etc/pki/vdsm/certs/vdsmcert.pem
/etc/pki/vdsm/libvirt-spice/server-cert.pem
/etc/pki/libvirt/clientcert.pem
Is there another location missing? What else can I try?
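For completeness, this is how I am checking the expiry dates on the replaced files (a small sketch that shells out to openssl; the paths are the ones listed above and may differ on your host):

```python
import subprocess

# Paths from the list above; adjust to match your host.
CERTS = [
    "/etc/pki/vdsm/certs/vdsmcert.pem",
    "/etc/pki/vdsm/libvirt-spice/server-cert.pem",
    "/etc/pki/libvirt/clientcert.pem",
]

def cert_end_date(path):
    """Return openssl's notAfter line for a PEM certificate."""
    out = subprocess.run(
        ["openssl", "x509", "-in", path, "-noout", "-enddate"],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()  # e.g. "notAfter=May 30 12:00:00 2024 GMT"

if __name__ == "__main__":
    for cert in CERTS:
        print(cert, "->", cert_end_date(cert))
```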
All help appreciated in advance
1 year, 6 months
/tmp/lvm.log keeps growing in Host
by kanehisa@ktcsp.net
I don't understand why lvm.log is placed in the /tmp directory without rotation.
I noticed this when I started getting the following event notification every 2 hours.
EventID :24
Message :Critical, Low disk space. Host ovirt01 has less than 500 MB of free space left on: /tmp. Low disk space might cause an issue upgrading this host.
As a workaround, I added a log rotation setting for /tmp/lvm.log, but is this the correct way to handle it?
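For reference, the rotation setting I added looks roughly like this (a sketch: the filename under /etc/logrotate.d/ and the size/rotate thresholds are arbitrary choices; copytruncate is used because the writer keeps the file open and cannot easily be signaled):

```
# /etc/logrotate.d/lvm-tmp  (hypothetical filename)
/tmp/lvm.log {
    size 50M
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
```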
I should have read through the Python program below before asking the question,
but please forgive me; I am not very knowledgeable about Python.
# cat /usr/lib/python3.6/site-packages/blivet/devicelibs/lvm.py | grep lvm.log
config_string += "log {level=7 file=/tmp/lvm.log syslog=0}"
Thanks in advance!!
Further information is below
[root@ovirt01 ~]# cat /etc/os-release
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8.7.2206.0"
VARIANT="oVirt Node 4.5.4"
VARIANT_ID="ovirt-node"
PRETTY_NAME="oVirt Node 4.5.4"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://www.ovirt.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
PLATFORM_ID="platform:el8"
[root@ovirt01 ~]# uname -a
Linux ovirt01 4.18.0-408.el8.x86_64 #1 SMP Mon Jul 18 17:42:52 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
[root@ovirt01 ~]# df -h | grep -E " /tmp|Filesystem"
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/onn_ovirt01-tmp 1014M 515M 499M 51% /tmp
[root@ovirt01 ~]# stat /tmp/lvm.log
File: /tmp/lvm.log
Size: 463915707 Blocks: 906088 IO Block: 4096 regular file
Device: fd0eh/64782d Inode: 137 Links: 1
Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root)
Context: system_u:object_r:lvm_tmp_t:s0
Access: 2023-02-03 10:30:06.605936740 +0900
Modify: 2023-02-03 09:52:19.712301285 +0900
Change: 2023-02-03 09:52:19.712301285 +0900
Birth: 2023-01-16 01:06:02.768495837 +0900
1 year, 7 months
virt-viewer - working alternative
by lars.stolpe@bvg.de
Hi all,
I tried to use virt-viewer 11 (Windows) with oVirt 4.4. I also tried virt-viewer 10, 9, 8, and 7.
The older versions can't handle ISOs residing in data domains, and the latest version crashes when I try the media button.
Almost all links on the website are dead: no user guide, no prerequisites list...
Since virt-viewer is not working properly and seems to be abandoned, I'm looking for a working alternative.
Is there a working way to get a console to the virtual machines running in oVirt 4.4?
1 year, 7 months
4.4.9 -> 4.4.10 Cannot start or migrate any VM (hotpluggable cpus requested exceeds the maximum cpus supported by KVM)
by Jillian Morgan
After upgrading the engine from 4.4.9 to 4.4.10, and then upgrading one
host, any attempt to migrate a VM to that host or start a VM on that host
results in the following error:
Number of hotpluggable cpus requested (16) exceeds the maximum cpus
supported by KVM (8)
While the version of qemu is the same across hosts
(qemu-kvm-6.0.0-33.el8s.x86_64), I traced the difference to the upgraded
kernel on the new host. I have always run elrepo's kernel-ml on these hosts
to support bcache, which RHEL's kernel doesn't support. The working hosts
still run kernel-ml-5.15.12; the upgraded host ran kernel-ml-5.17.0.
In case anyone else runs kernel-ml, have you run into this issue?
Does anyone know why KVM's KVM_CAP_MAX_VCPUS value is lowered on the new
kernel?
Does anyone know how to query the KVM capabilities from userspace without
writing a program leveraging kvm_ioctl()'s?
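For what it's worth, the smallest thing I've found is to issue KVM_CHECK_EXTENSION against /dev/kvm directly; not quite "without a program", but short enough to inline. A sketch — the ioctl number and capability constants are taken from linux/kvm.h as I read them, so double-check them against your kernel headers:

```python
import fcntl
import os

# _IO(KVMIO, 0x03) with KVMIO = 0xAE, per linux/kvm.h
KVM_CHECK_EXTENSION = 0xAE03
KVM_CAP_NR_VCPUS = 9     # recommended max vCPUs
KVM_CAP_MAX_VCPUS = 66   # hard max vCPUs

def kvm_check_extension(cap):
    """Return the value KVM reports for a capability (0 = unsupported)."""
    fd = os.open("/dev/kvm", os.O_RDWR)
    try:
        return fcntl.ioctl(fd, KVM_CHECK_EXTENSION, cap)
    finally:
        os.close(fd)

if __name__ == "__main__" and os.path.exists("/dev/kvm"):
    print("recommended vCPUs:", kvm_check_extension(KVM_CAP_NR_VCPUS))
    print("max vCPUs:", kvm_check_extension(KVM_CAP_MAX_VCPUS))
```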
Related to this, it seems that ovirt and/or libvirtd always runs qemu-kvm
with an -smp argument of "maxcpus=16". This causes qemu's built-in check to
fail on the new kernel, which supports a max_vcpus of 8.
Why does ovirt always request maxcpus=16?
And yes, before you say it, I know you're going to say that running
kernel-ml isn't supported.
--
Jillian Morgan (she/her) 🏳️⚧️
Systems & Networking Specialist
Primordial Software Group & I.T. Consultancy
https://www.primordial.ca
1 year, 7 months
ovirt 4.6 compatibility with el8
by Nathanaël Blanchet
Hello community!
I noticed that rhel9 no longer supports ISP2532-based 8Gb
Fibre Channel HBAs.
Given that all of my ovirt hosts are ISP2532-based, I'd like to know whether
ovirt 4.6 will still be compatible with el8 or will only support el9.
--
Nathanaël Blanchet
Administrateur Systèmes et Réseaux
Service Informatique et REseau (SIRE)
Département des systèmes d'information
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
1 year, 7 months
oVirt hosted-engine deployment times out while "Wait for the host to be up"
by brwsergmslst@gmail.com
Hi all,
I am currently trying to deploy the hosted-engine, without success.
Unfortunately I cannot see what I am missing here. It would be great if you could take a look and help me out. Below is an excerpt of the generated logfile.
```
2023-03-22 20:34:10,417+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Get active list of active firewalld zones]
2023-03-22 20:34:12,222+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 changed: [localhost]
2023-03-22 20:34:13,527+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Configure libvirt firewalld zone]
2023-03-22 20:34:20,246+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 changed: [localhost]
2023-03-22 20:34:21,550+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Reload firewall-cmd]
2023-03-22 20:34:23,957+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 changed: [localhost]
2023-03-22 20:34:25,462+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Add host]
2023-03-22 20:34:27,469+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 changed: [localhost]
2023-03-22 20:34:28,672+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Include after_add_host tasks files]
2023-03-22 20:34:30,678+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.ovirt.hosted_engine_setup : Let the user connect to the bootstrap engine VM to manually fix host configuration]
2023-03-22 20:34:31,882+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 skipping: [localhost]
2023-03-22 20:34:32,987+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
2023-03-22 20:34:33,890+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 skipping: [localhost]
2023-03-22 20:34:34,893+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
2023-03-22 20:34:36,096+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 ok: [localhost]
2023-03-22 20:34:37,100+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Always revoke the SSO token]
2023-03-22 20:34:38,705+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 ok: [localhost]
2023-03-22 20:34:40,111+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
2023-03-22 20:34:41,015+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 ok: [localhost]
2023-03-22 20:34:42,020+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
2023-03-22 20:34:44,028+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 ok: [localhost]
2023-03-22 20:34:45,032+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Wait for the host to be up]
2023-03-22 20:56:33,746+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'changed': False, 'ovirt_hosts': [{'href': '/ovirt-engine/api/hosts/17ad8088-c9e9-433f-90b9-ce8023a625e6', 'comment': '', '
id': '17ad8088-c9e9-433f-90b9-ce8023a625e6', 'name': 'vhost-tmp01.example.com', 'address': 'vhost-tmp01.example.com', 'affinity_labels': [], 'auto_numa_status': 'unknown', 'certificate': {'organization': 'example.com', 'subject': 'O=example.com,CN=vhost-tmp01.example.com'}, 'cluster': {'href': '/ovirt-engine/api/clusters/4effacb7-e5dd-4e52-86c9-90ebd2aafa0d', 'id': '4effacb7-e5dd-4e52-86c9-90ebd2aafa0d'}, 'cpu
': {'speed': 0.0, 'topology': {}}, 'cpu_units': [], 'device_passthrough': {'enabled': False}, 'devices': [], 'external_network_provider_configurations': [], 'external_status': 'ok', 'hardware_information': {'supported_rng_sources': []}, 'h
ooks': [], 'katello_errata': [], 'kdump_status': 'unknown', 'ksm': {'enabled': False}, 'max_scheduling_memory': 0, 'memory': 0, 'network_attachments': [], 'nics': [], 'numa_nodes': [], 'numa_supported': False, 'os': {'custom_kernel_cmdline
': ''}, 'ovn_configured': False, 'permissions': [], 'port': 54321, 'power_management': {'automatic_pm_enabled': True, 'enabled': False, 'kdump_detection': True, 'pm_proxies': []}, 'protocol': 'stomp', 'reinstallation_required': False, 'se_linux': {}, 'spm': {'priority': 5, 'status': 'none'}, 'ssh': {'fingerprint': 'SHA256:IDhzMTO49OoNIHmLGnOIGwTn8LQB/lYrJdUnak144Q4', 'port': 22, 'public_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNWbUlwbIiSliqc6OCFwG4w6/OaJb63JijJ1okaj6Y3gqNPO2XZWDTfwraIqm0S0SGlVk/g0oYcYIQ/0hU5Q+bE='}, 'statistics': [], 'status': 'install_failed', 'storage_connection_extensions': [], 'summary': {'total': 0}, 'tags': [], 'transparent_huge_pages': {'enabled': False}, 'type': 'rhel', 'unmanaged_networks': [], 'update_available': False, 'vgpu_placement': 'consolidated'}], 'invocation': {'module_args': {'pattern': 'name=vhost-tmp01.example.com', 'fetch_nested': False, 'nested_attributes': [], 'follow': [], 'all_content': False, 'cluster
_version': None}}, '_ansible_no_log': None, 'attempts': 120}
2023-03-22 20:56:33,847+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 ignored: [localhost]: FAILED! => {"attempts": 120, "changed": false, "ovirt_hosts": [{"address": "vhost-tmp01.example.com", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": "example.com", "subject": "O=example.com,CN=vhost-tmp01.example.com"}, "cluster": {"href": "/ovirt-engine/api/clusters/4effacb7-e5dd-4e52-86c9-90ebd2aafa0d", "id": "4effacb7-e5dd-4e52-86c9-90ebd2aafa0d"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "cpu_units": [], "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/17ad8088-c9e9-433f-90b9-ce8023a625e6", "id": "17ad8088-c9e9-433f-90b9-ce8023a625e6", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_
scheduling_memory": 0, "memory": 0, "name": "vhost-tmp01.example.com", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "ovn_configured": false, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "reinstallation_required": false, "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:IDhzMTO49OoNIHmLGnOIGwTn8LQB/lYrJdUnak144Q4", "port": 22, "public_key": "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNWbUlwbIiSliqc6OCFwG4w6/OaJb63JijJ1okaj6Y3gqNPO2XZWDTfwraIqm0S0SGlVk/g0oYcYIQ/0hU5Q+bE="}, "statistics": [], "status": "install_failed", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "rhel", "unmanaged_networks": [], "update_available": false, "vgpu_placement": "
consolidated"}]}
2023-03-22 20:56:34,750+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
2023-03-22 20:56:35,655+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'msg': 'Host is not up, please check logs, perhaps also on the engine machine', '_ansible_no_log': None, 'changed': False}
2023-03-22 20:56:35,755+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost]: FAILED! => {"changed": false, "msg": "Host is not up, please check logs, perhaps also on the engine machine"}
```
Thank you in advance.
Best regards
1 year, 7 months
IBM ESS remote filesystem as POSIX compliant fs?
by arc@b4restore.com
Hi all,
We are trying to mount a remote filesystem in oVirt from an IBM ESS3500, but it seems to be working against us.
Every time I try to mount it I get this in supervdsm.log (two different attempts):
MainProcess|jsonrpc/7::DEBUG::2023-03-31 10:55:41,808::supervdsm_server::78::SuperVdsm.ServerCallback::(wrapper) call mount with (<vdsm.supervdsm_server._SuperVdsm object at 0x7f7595dba9b0>, '/essovirt01', '/rhev/data-center/mnt/_essovirt01') {'mntOpts': 'rw,relatime,dev=essovirt01', 'vfstype': 'gpfs', 'cgroup': None}
MainProcess|jsonrpc/7::DEBUG::2023-03-31 10:55:41,808::commands::217::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/mount -t gpfs -o rw,relatime,dev=essovirt01 /essovirt01 /rhev/data-center/mnt/_essovirt01 (cwd None)
MainProcess|jsonrpc/7::DEBUG::2023-03-31 10:55:41,941::commands::230::root::(execCmd) FAILED: <err> = b'mount: /rhev/data-center/mnt/_essovirt01: mount(2) system call failed: Stale file handle.\n'; <rc> = 32
MainProcess|jsonrpc/7::ERROR::2023-03-31 10:55:41,941::supervdsm_server::82::SuperVdsm.ServerCallback::(wrapper) Error in mount
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 80, in wrapper
res = func(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 119, in mount
cgroup=cgroup)
File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 263, in _mount
_runcmd(cmd)
File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 291, in _runcmd
raise MountError(cmd, rc, out, err)
vdsm.storage.mount.MountError: Command ['/usr/bin/mount', '-t', 'gpfs', '-o', 'rw,relatime,dev=essovirt01', '/essovirt01', '/rhev/data-center/mnt/_essovirt01'] failed with rc=32 out=b'' err=b'mount: /rhev/data-center/mnt/_essovirt01: mount(2) system call failed: Stale file handle.\n'
MainProcess|mpathhealth::DEBUG::2023-03-31 10:55:48,993::supervdsm_server::78::SuperVdsm.ServerCallback::(wrapper) call dmsetup_run_status with ('multipath',) {}
MainProcess|mpathhealth::DEBUG::2023-03-31 10:55:48,993::commands::137::common.commands::(start) /usr/bin/taskset --cpu-list 0-63 /usr/sbin/dmsetup status --target multipath (cwd None)
MainProcess|mpathhealth::DEBUG::2023-03-31 10:55:49,000::commands::82::common.commands::(run) SUCCESS: <err> = b''; <rc> = 0
MainProcess|mpathhealth::DEBUG::2023-03-31 10:55:49,000::supervdsm_server::85::SuperVdsm.ServerCallback::(wrapper) return dmsetup_run_status with b'360050764008100e42800000000000223: 0 629145600 multipath 2 0 1 0 2 1 A 0 1 2 8:192 A 0 0 1 E 0 1 2 8:144 A 0 0 1 \n360050764008100e42800000000000229: 0 629145600 multipath 2 0 1 0 2 1 A 0 1 2 8:208 A 0 0 1 E 0 1 2 8:160 A 0 0 1 \n360050764008100e4280000000000022a: 0 10485760 multipath 2 0 1 0 2 1 A 0 1 2 8:176 A 0 0 1 E 0 1 2 8:224 A 0 0 1 \n360050764008100e42800000000000260: 0 1048576000 multipath 2 0 1 0 2 1 A 0 2 2 8:16 A 0 0 1 8:80 A 0 0 1 E 0 2 2 8:48 A 0 0 1 8:112 A 0 0 1 \n360050764008100e42800000000000261: 0 209715200 multipath 2 0 1 0 2 1 A 0 2 2 8:64 A 0 0 1 8:128 A 0 0 1 E 0 2 2 8:32 A 0 0 1 8:96 A 0 0 1 \n360050764008102edd8000000000001ab: 0 8589934592 multipath 2 0 1 0 2 1 A 0 1 2 65:128 A 0 0 1 E 0 1 2 65:32 A 0 0 1 \n360050764008102f558000000000001a9: 0 8589934592 multipath 2 0 1 0 2 1 A 0 1 2 65:16 A 0 0 1 E 0 1 2 65:48 A
0 0 1 \n3600507640081820ce800000000000077: 0 838860800 multipath 2 0 1 0 2 1 A 0 1 2 8:240 A 0 0 1 E 0 1 2 65:0 A 0 0 1 \n3600507680c800058d000000000000484: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 65:160 A 0 0 1 E 0 1 2 66:80 A 0 0 1 \n3600507680c800058d000000000000485: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 65:176 A 0 0 1 E 0 1 2 66:96 A 0 0 1 \n3600507680c800058d000000000000486: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 66:112 A 0 0 1 E 0 1 2 65:192 A 0 0 1 \n3600507680c800058d000000000000487: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 66:128 A 0 0 1 E 0 1 2 65:208 A 0 0 1 \n3600507680c800058d000000000000488: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 65:224 A 0 0 1 E 0 1 2 66:144 A 0 0 1 \n3600507680c800058d000000000000489: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 66:160 A 0 0 1 E 0 1 2 65:240 A 0 0 1 \n3600507680c800058d00000000000048a: 0 20971520 multipath 2 0 1 0 2 1 A 0 1 2 66:0 A 0 0 1 E 0 1 2 66:176 A 0 0 1 \n3600507680c800058d00000000000048b: 0 20971520 multipath 2 0 1 0 2 1
A 0 1 2 66:192 A 0 0 1 E 0 1 2 66:16 A 0 0 1 \n3600507680c800058d00000000000048c: 0 419430400 multipath 2 0 1 0 2 1 A 0 1 2 65:144 A 0 0 1 E 0 1 2 66:64 A 0 0 1 \n3600507680c800058d0000000000004b1: 0 41943040 multipath 2 0 1 0 2 1 A 0 1 2 66:208 A 0 0 1 E 0 1 2 66:32 A 0 0 1 \n3600507680c800058d0000000000004b2: 0 41943040 multipath 2 0 1 0 2 1 A 0 1 2 66:48 A 0 0 1 E 0 1 2 66:224 A 0 0 1 \n360050768108100c9d0000000000001aa: 0 8589934592 multipath 2 0 1 0 2 1 A 0 1 2 65:64 A 0 0 1 E 0 1 2 65:112 A 0 0 1 \n360050768108180ca48000000000001a9: 0 8589934592 multipath 2 0 1 0 2 1 A 0 1 2 65:96 A 0 0 1 E 0 1 2 65:80 A 0 0 1 \n'
MainProcess|jsonrpc/0::DEBUG::2023-03-31 10:55:49,938::supervdsm_server::78::SuperVdsm.ServerCallback::(wrapper) call mount with (<vdsm.supervdsm_server._SuperVdsm object at 0x7f7595dba9b0>, '/essovirt01', '/rhev/data-center/mnt/_essovirt01') {'mntOpts': 'rw,relatime', 'vfstype': 'gpfs', 'cgroup': None}
MainProcess|jsonrpc/0::DEBUG::2023-03-31 10:55:49,939::commands::217::root::(execCmd) /usr/bin/taskset --cpu-list 0-63 /usr/bin/mount -t gpfs -o rw,relatime /essovirt01 /rhev/data-center/mnt/_essovirt01 (cwd None)
MainProcess|jsonrpc/0::DEBUG::2023-03-31 10:55:49,944::commands::230::root::(execCmd) FAILED: <err> = b'mount: /rhev/data-center/mnt/_essovirt01: wrong fs type, bad option, bad superblock on /essovirt01, missing codepage or helper program, or other error.\n'; <rc> = 32
MainProcess|jsonrpc/0::ERROR::2023-03-31 10:55:49,944::supervdsm_server::82::SuperVdsm.ServerCallback::(wrapper) Error in mount
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 80, in wrapper
res = func(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 119, in mount
cgroup=cgroup)
File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 263, in _mount
_runcmd(cmd)
File "/usr/lib/python3.6/site-packages/vdsm/storage/mount.py", line 291, in _runcmd
raise MountError(cmd, rc, out, err)
vdsm.storage.mount.MountError: Command ['/usr/bin/mount', '-t', 'gpfs', '-o', 'rw,relatime', '/essovirt01', '/rhev/data-center/mnt/_essovirt01'] failed with rc=32 out=b'' err=b'mount: /rhev/data-center/mnt/_essovirt01: wrong fs type, bad option, bad superblock on /essovirt01, missing codepage or helper program, or other error.\n'
I tried earlier, in the same oVirt version, to map a SAN LUN directly to the hypervisor and create a local gpfs filesystem, and that was able to mount with the below parameters within the GUI:
Storage Type : POSIX Compliant FS
HOST : The host that has scale installed and mounted
Path : /essovirt01
VFS Type: gpfs
Mount Options: rw,relatime,dev=essovirt01
This works, but with a remote filesystem it does not. I'm not sure why, as it should be pretty close to a local filesystem: it is granted with "File system access: essovirt01 (rw, root allowed)" and mounted with /essovirt01 on /essovirt01 type gpfs (rw,relatime,seclabel).
Anybody have a clue what to do? It seems weird that there should be such a difference between a locally owned and a remotely owned fs when mounted through the gpfs remote fs / remote cluster option.
And just to verify, I have given the filesystem the correct permissions for ovirt: drwxr-xr-x. 2 vdsm kvm 262144 Mar 31 10:33 essovirt01
Thanks in advance.
Christiansen
1 year, 7 months
oVirt Self-Hosted Engine VM issue
by linim@gdls.com
Hello
If this is a repeat discussion I apologize, but I am looking to see if anyone else out there has had, or heard of, this issue happening.
I am attempting to install a KVM host with a Self-Hosted Engine. When it goes to install the oVirt engine on the VM being created under its temporary IP, I notice that the VM disconnects and I lose all network connectivity to it.
This is an attempt to install OLVM using the self-hosted engine, referencing Oracle Doc ID 2933018.1. I have engaged Oracle support, but I am hoping that perhaps someone else has run into this issue and can offer up avenues of investigation.
If this is outside the scope of an oVirt discussion, as it is OLVM, then this can be closed out.
1 year, 7 months
Encrypted VNC request using SASL not maintained after VM migration
by Jon Sattelberger
I recently followed the instructions for enabling VNC encryption for FIPS-enabled hosts [1]. The VNC console seems to be fine on the host where the VM is initially started (excluding noVNC in the browser). The qemu-kvm arguments are not maintained properly upon VM migration, which declares "password=on" in the -vnc argument. Subsequent VNC console requests then result in an authentication failure. SPICE seems to be fine. All hosts and the engine are FIPS enabled, running oVirt-4.5.4-1.el8.
Is there a way to maintain the absence of "password=on" after VM migration? Perhaps a hook in the interim.
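In case it helps, the interim hook I have in mind would look something like this (an untested sketch: the before_vm_migrate_destination hook point, the hook path, and the hooking read/write calls are my assumptions about the vdsm hook API):

```python
# Sketch of /usr/libexec/vdsm/hooks/before_vm_migrate_destination/50_drop_vnc_password
# (hypothetical path): strip the VNC password attributes from the incoming
# domain XML so qemu is not started with password=on.
from xml.dom import minidom


def drop_vnc_password(domxml):
    """Remove passwd/passwdValidTo from every VNC <graphics> element."""
    for graphics in domxml.getElementsByTagName("graphics"):
        if graphics.getAttribute("type") != "vnc":
            continue
        for attr in ("passwd", "passwdValidTo"):
            if graphics.hasAttribute(attr):
                graphics.removeAttribute(attr)
    return domxml


if __name__ == "__main__":
    # In a real hook this would round-trip through vdsm's hooking module:
    #   import hooking
    #   domxml = hooking.read_domxml()
    #   hooking.write_domxml(drop_vnc_password(domxml))
    pass
```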
Initial VM start:
-object {"qom-type":"tls-creds-x509","id":"vnc-tls-creds0","dir":"/etc/pki/vdsm/libvirt-vnc","endpoint":"server","verify-peer":false} -vnc 192.168.100.67:0,tls-creds=vnc-tls-creds0,sasl=on,audiodev=audio1 -k en-us
Debug output from remote-viewer:
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.812: vncconnection.c Possible VeNCrypt sub-auth 263
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.812: vncconnection.c Emit main context 12
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.812: vncconnection.c Requested auth subtype 263
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.813: vncconnection.c Waiting for VeNCrypt auth subtype
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.813: vncconnection.c Choose auth 263
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.813: vncconnection.c Checking if credentials are needed
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.813: vncconnection.c No credentials required
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.813: vncconnection.c Read error Resource temporarily unavailable
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.841: vncconnection.c Do TLS handshake
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.944: vncconnection.c Checking if credentials are needed
(remote-viewer:1495470): gtk-vnc-DEBUG: 12:51:55.944: vncconnection.c Want a TLS clientname
... snip ...
Migrated VM:
-object {"qom-type":"tls-creds-x509","id":"vnc-tls-creds0","dir":"/etc/pki/vdsm/libvirt-vnc","endpoint":"server","verify-peer":false} -vnc 192.168.100.68:0,password=on,tls-creds=vnc-tls-creds0,sasl=on,audiodev=audio1 -k en-us
Debug output from remote-viewer:
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.487: vncconnection.c Possible VeNCrypt sub-auth 261
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.487: vncconnection.c Emit main context 12
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.488: vncconnection.c Requested auth subtype 261
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.488: vncconnection.c Waiting for VeNCrypt auth subtype
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.488: vncconnection.c Choose auth 261
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.488: vncconnection.c Checking if credentials are needed
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.488: vncconnection.c No credentials required
... snip ...
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.780: vncconnection.c Checking auth result
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.808: vncconnection.c Fail Authentication failed
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.808: vncconnection.c Error: Authentication failed
(remote-viewer:1495270): gtk-vnc-DEBUG: 12:50:29.808: vncconnection.c Emit main context 16
(remote-viewer:1495270): virt-viewer-WARNING **: 12:50:29.808: vnc-session: got vnc error Authentication failed
Thank you,
Jon
[1] https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/...
1 year, 7 months