Fedora CoreOS
by lejeczek
Hi guys.
From what I gather there is no oVirt for Fedora CoreOS, but
I should ask here at the source - is there an oVirt for that
OS, and if there is not as of now, are there any plans or
discussions to make that a reality?
many thanks, L.
Data Centers status Not Operational
by nexpron@gmail.com
Hi everyone,
I found 3 servers (HV1, HV2, HV3) that contain VMs in another server room. These servers are KVM-based hypervisors.
I logged into the oVirt Engine Web Administration. The Data Centers tab shows me only one entry:
Name: RCV
Storage: Shared
Status: Not Operational
Compatibility Version: 3.4
Description: [Empty]
The oVirt Engine Web Administration page shows
oVirt Engine Version: 3.4.0-1.el6
One VM on HV3 has stopped, the Hosts tab shows a Non Responsive status for every host (HV1-3), and every VM in the Virtual Machines tab shows an Unknown status.
What should I do to change the Data Center status? How do I start debugging the cause (see the sketch after the layout below)? The last administrator left the documentation in vestigial form :)
DataCenter: RCV
-->Cluster: RCV_Cluster
---->Host: HV1 node
---->Host: HV2 node
---->Host: HV3 engine
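(For anyone hitting the same state, a minimal sketch of first checks, assuming the usual VDSM layout of that era; service names and paths may differ on your hosts:)
# On each hypervisor (HV1-3): is VDSM running and listening?
service vdsmd status              # or: systemctl status vdsmd on newer hosts
ss -tlnp | grep 54321             # VDSM management port
# From the engine host: can the engine actually reach VDSM on each host?
ping -c3 HV1 && nc -zv HV1 54321
# Engine-side log for the reason the hosts went Non Responsive
tail -n 200 /var/log/ovirt-engine/engine.log
# Host-side VDSM log
tail -n 200 /var/log/vdsm/vdsm.log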
Best regards,
nexpron
Hosted-engine fail and host reboot
by Dominique D
I tried with 1 bare-metal host, and with 3 bare-metal or virtual hosts, and I still have the same problem installing the hosted engine. The hyperconverged part installs fine.
I have tried multiple versions of the oVirt ISO: 4.4.1, 4.4.4 and 4.4.6.
When I run hosted-engine --deploy, or deploy via Cockpit, it creates a temporary VM in the 192.168.222.x subnet and I can connect over SSH to this temporary IP. When the script displays "TASK [ovirt.ovirt.hosted_engine_setup : Wait for the host to be up]", the server reboots and I have nothing left.
Could it be a problem during creation of the ovirtmgmt bridge?
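(A minimal sketch of where to look for clues after the reboot, assuming default log locations on the host:)
# Errors from the previous boot (needs a persistent journal)
journalctl -b -1 -p err
# hosted-engine deploy logs, including the ansible ones
ls -lt /var/log/ovirt-hosted-engine-setup/
# Was the ovirtmgmt bridge created before the host rebooted?
ip link show ovirtmgmt
# VDSM log around the time of the "Wait for the host to be up" task
tail -n 300 /var/log/vdsm/vdsm.log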
Here are all my logs: https://drive.google.com/drive/folders/1kFFSlIqbjVwSN8t88aQZZR45DYEHPUvt?...
2021-05-26 10:15:09,135-0400 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : Always revoke the SSO token]
2021-05-26 10:15:10,439-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 {'msg': "You must specify either 'url' or 'hostname'.", 'invocation': {'module_args': {'state': 'absent', 'ovirt_auth': {'changed': False, 'ansible_facts': {'ovirt_auth': {'token': 'Mz2onwB7qWX2x8HnJVgetQIQ9U4eVziRt8TEabfoizI2B98d0PDp-yxTU92a9lbun2vcr_i5yOXRsJKJKhqkVw', 'url': 'https://oe.telecom.lan/ovirt-engine/api', 'ca_file': None, 'insecure': True, 'timeout': 0, 'compress': True, 'kerberos': False, 'headers': None}}, 'failed': False, 'attempts': 1}, 'timeout': 0, 'compress': True, 'kerberos': False, 'url': None, 'hostname': None, 'username': None, 'password': None, 'ca_file': None, 'insecure': None, 'headers': None, 'token': None}}, '_ansible_no_log': False, 'changed': False}
2021-05-26 10:15:10,540-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 ignored: [localhost]: FAILED! => {"changed": false, "msg": "You must specify either 'url' or 'hostname'."}
2021-05-26 10:15:11,643-0400 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
2021-05-26 10:15:12,647-0400 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:111 ok: [localhost]
2021-05-26 10:15:13,851-0400 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
2021-05-26 10:15:15,261-0400 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:111 ok: [localhost]
2021-05-26 10:15:17,275-0400 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : Wait for the host to be up]
2021-05-26 10:22:54,758-0400 ERROR otopi.context context._executeMethod:154 Failed to execute stage 'Closing up': SIG1
2021-05-26 10:22:54,762-0400 DEBUG otopi.context context.dumpEnvironment:765 ENVIRONMENT DUMP - BEGIN
2021-05-26 10:22:54,763-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV BASE/error=bool:'True'
2021-05-26 10:22:54,763-0400 DEBUG otopi.context context.dumpEnvironment:775 ENV BASE/exceptionInfo=list:'[(<class 'RuntimeError'>, RuntimeError('SIG1',), <traceback object at 0x7fa8fc1ca1c8>)]'
thank you
Adding a Ubuntu Host's NFS share to oVirt
by David White
Hello,
Is it possible to use Ubuntu to share an NFS export with oVirt? I'm trying to set up a Backup Domain for my environment.
I got to the point of actually adding the new Storage Domain.
When I click OK, I see the storage domain appear momentarily before disappearing, at which point I get a message about oVirt not being able to obtain a lock.
It appears I'm running into the issue described in this thread: https://lists.ovirt.org/archives/list/users@ovirt.org/thread/BNVX.... Although the actual export is ext4, not xfs.
From what I'm reading on that thread and elsewhere, it sounds like this problem is a result of SELinux not being present, is that correct?
Is my only option here to install an OS that supports SELinux?
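(For reference, a minimal sketch of the Ubuntu-side setup that oVirt generally expects -- the export must be owned by UID/GID 36:36 (vdsm:kvm) and writable over NFS; the path /exports/backup is only an example:)
# /etc/exports on the Ubuntu server
/exports/backup  *(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)
# Ownership and permissions VDSM expects
chown -R 36:36 /exports/backup
chmod 0755 /exports/backup
exportfs -ra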
Sent with ProtonMail Secure Email.
Can't remove snapshot
by David Johnson
Hi all,
I patched one of my Windows VMs yesterday. I started by snapshotting the
VM, then applied the Windows update. Now that the patch has been tested, I
want to remove the snapshot. I get this message:
Error while executing action:
win-sql-2019:
- Cannot remove Snapshot. The following attached disks are in ILLEGAL
status: win-2019-tmpl_Disk1 - please remove them and try again.
Does anyone have any thoughts on how to recover from this? I really don't want
to keep this snapshot hanging around.
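(In case it helps: a hedged sketch of how one might inspect the ILLEGAL disk from the engine host before touching anything -- I believe the engine ships a DB helper for locked/illegal entities, but check its -h output first, since options vary between versions:)
cd /usr/share/ovirt-engine/setup/dbutils
./unlock_entity.sh -h
./unlock_entity.sh -t all -q        # query locked/illegal entities without changing anything
grep -i illegal /var/log/ovirt-engine/engine.log | tail    # why the merge left the disk ILLEGAL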
Thanks in advance,
David Johnson
Cannot add new host to Ovirt
by pablo@miami.edu
I have an oVirt installation with a hosted engine and three hosts, using Gluster as the storage for the VMs.
oVirt: 4.4.6.7
Hosts: CentOS Stream release 8 (updated to latest)
So far so good.
I am trying to add a new host to the cluster with the same OS and hardware as the others, and I cannot get it to install; it gives me all kinds of errors and will not install.
I reinstalled the OS and I am getting the same results.
DNS is configured properly and working ok for all hosts.
I can see this error in the log file ansible-runner-service.log:
2021-05-30 15:46:38,319 - runner_service.services.hosts - ERROR - SSH - NOAUTH:SSH auth error - passwordless ssh not configured for 'ovirt4'
(sshd is configured exactly the same as on all the other hosts, and I can log in to this host without a password from the oVirt hosted engine)
I see these errors in engine.log:
2021-05-30 16:22:35,166Z ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] Unable to process messages PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
2021-05-30 16:22:35,175Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-32) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovirt4.net.miami.edu command Get Host Capabilities failed: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
2021-05-30 16:22:35,175Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-32) [] Unable to RefreshCapabilities: VDSNetworkException: VDSGenericException: VDSNetworkException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
2021-05-30 16:22:35,597Z ERROR [org.ovirt.engine.core.bll.gluster.GlusterSyncJob] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-23) [] Error while refreshing server data for cluster 'Default' from database: null
I tried reinstalling, rebooting, putting the host in maintenance, enrolling the certificate, checking for upgrades, and rebooting both the hosts and the oVirt engine multiple times:
nothing works.
What am I doing wrong?
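(A minimal sketch of checks that might narrow this down, assuming default engine paths, with ovirt4 being the new host's name:)
# From the engine VM: does the deploy key actually work against the new host?
ssh -i /etc/pki/ovirt-engine/keys/engine_id_rsa root@ovirt4 hostname
# What certificate is the new host presenting on the VDSM port?
echo | openssl s_client -connect ovirt4:54321 2>/dev/null | openssl x509 -noout -issuer -subject -dates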
Thank you in advance for your help.
OSError: [Errno 24] Too many open files
by lejeczek
Hi guys
I'm trying to install HE on a KVM host and the installer cannot
get past this:
[ ERROR ] OSError: [Errno 24] Too many open files
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "Unexpected
failure during module execution.", "stdout": ""}
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on
engine machine]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "Using a
SSH password instead of a key is not possible because Host
Key checking is enabled and sshpass does not support this.
Please add this host's fingerprint to your known_hosts file
to manage this host."}
[ ERROR ] Failed to execute stage 'Closing up': Failed
executing ansible-playbook
[ INFO ] Stage: Clean up
The KVM host itself should satisfy the requirements, as a HE setup
from the 'master' repo previously worked on it.
Any ideas and thoughts on what that cryptic error message is
saying, and how to troubleshoot it, will be very much
appreciated.
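(A minimal sketch of how to see whether the file-descriptor limit is the local shell's or system-wide, and how to raise it for a retry:)
ulimit -n                      # soft limit for the current shell
cat /proc/sys/fs/file-nr       # open fds vs. system-wide maximum
# rough count of open fds per process name
lsof 2>/dev/null | awk '{print $1}' | sort | uniq -c | sort -rn | head
# raise the limit for this shell before re-running the deploy (if the hard limit allows)
ulimit -n 65535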
many thanks, L.
Error while deploying Hyperconverged oVirt 4.3.3(el7) + GlusterFS
by techbreak@icloud.com
As per the title, we want to use 3 hosts with the oVirt hyperconverged solution. We installed oVirt and Gluster as described in the guide https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying...
When we try to deploy, we get some errors which we cannot figure out.
==============================================================
gdeploy creates these configuration rules:
hc_nodes:
hosts:
virtnodetest-0-0:
gluster_infra_volume_groups:
- vgname: gluster_vg_sdb
pvname: /dev/sdb
gluster_infra_mount_devices:
- path: /gluster_bricks/engine
lvname: gluster_lv_engine
vgname: gluster_vg_sdb
- path: /gluster_bricks/isostorage
lvname: gluster_lv_isostorage
vgname: gluster_vg_sdb
- path: /gluster_bricks/vmstorage
lvname: gluster_lv_vmstorage
vgname: gluster_vg_sdb
gluster_infra_thick_lvs:
- vgname: gluster_vg_sdb
lvname: gluster_lv_engine
size: 150G
gluster_infra_thinpools:
- vgname: gluster_vg_sdb
thinpoolname: gluster_thinpool_gluster_vg_sdb
poolmetadatasize: 16G
gluster_infra_lv_logicalvols:
- vgname: gluster_vg_sdb
thinpool: gluster_thinpool_gluster_vg_sdb
lvname: gluster_lv_isostorage
lvsize: 250G
- vgname: gluster_vg_sdb
thinpool: gluster_thinpool_gluster_vg_sdb
lvname: gluster_lv_vmstorage
lvsize: 3500G
virtnodetest-0-1:
gluster_infra_volume_groups:
- vgname: gluster_vg_sdb
pvname: /dev/sdb
gluster_infra_mount_devices:
- path: /gluster_bricks/engine
lvname: gluster_lv_engine
vgname: gluster_vg_sdb
- path: /gluster_bricks/isostorage
lvname: gluster_lv_isostorage
vgname: gluster_vg_sdb
- path: /gluster_bricks/vmstorage
lvname: gluster_lv_vmstorage
vgname: gluster_vg_sdb
gluster_infra_thick_lvs:
- vgname: gluster_vg_sdb
lvname: gluster_lv_engine
size: 150G
gluster_infra_thinpools:
- vgname: gluster_vg_sdb
thinpoolname: gluster_thinpool_gluster_vg_sdb
poolmetadatasize: 16G
gluster_infra_lv_logicalvols:
- vgname: gluster_vg_sdb
thinpool: gluster_thinpool_gluster_vg_sdb
lvname: gluster_lv_isostorage
lvsize: 250G
- vgname: gluster_vg_sdb
thinpool: gluster_thinpool_gluster_vg_sdb
lvname: gluster_lv_vmstorage
lvsize: 3500G
virtnodetest-0-2:
gluster_infra_volume_groups:
- vgname: gluster_vg_sdb
pvname: /dev/sdb
gluster_infra_mount_devices:
- path: /gluster_bricks/engine
lvname: gluster_lv_engine
vgname: gluster_vg_sdb
- path: /gluster_bricks/isostorage
lvname: gluster_lv_isostorage
vgname: gluster_vg_sdb
- path: /gluster_bricks/vmstorage
lvname: gluster_lv_vmstorage
vgname: gluster_vg_sdb
gluster_infra_thick_lvs:
- vgname: gluster_vg_sdb
lvname: gluster_lv_engine
size: 150G
gluster_infra_thinpools:
- vgname: gluster_vg_sdb
thinpoolname: gluster_thinpool_gluster_vg_sdb
poolmetadatasize: 16G
gluster_infra_lv_logicalvols:
- vgname: gluster_vg_sdb
thinpool: gluster_thinpool_gluster_vg_sdb
lvname: gluster_lv_isostorage
lvsize: 250G
- vgname: gluster_vg_sdb
thinpool: gluster_thinpool_gluster_vg_sdb
lvname: gluster_lv_vmstorage
lvsize: 3500G
vars:
gluster_infra_disktype: JBOD
gluster_set_selinux_labels: true
gluster_infra_fw_ports:
- 2049/tcp
- 54321/tcp
- 5900/tcp
- 5900-6923/tcp
- 5666/tcp
- 16514/tcp
gluster_infra_fw_permanent: true
gluster_infra_fw_state: enabled
gluster_infra_fw_zone: public
gluster_infra_fw_services:
- glusterfs
gluster_features_force_varlogsizecheck: false
cluster_nodes:
- virtnodetest-0-0
- virtnodetest-0-1
- virtnodetest-0-2
gluster_features_hci_cluster: '{{ cluster_nodes }}'
gluster_features_hci_volumes:
- volname: engine
brick: /gluster_bricks/engine/engine
arbiter: 0
- volname: isostorage
brick: /gluster_bricks/isostorage/isostorage
arbiter: 0
- volname: vmstorage
brick: /gluster_bricks/vmstorage/vmstorage
arbiter: 0
=========================================================================
The system returns this error:
PLAY [Setup backend] ***********************************************************
TASK [Gathering Facts] *********************************************************
ok: [virtnodetest-0-2]
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
TASK [Check if valid hostnames are provided] ***********************************
changed: [virtnodetest-0-1] => (item=virtnodetest-0-1)
changed: [virtnodetest-0-1] => (item=virtnodetest-0-0)
changed: [virtnodetest-0-1] => (item=virtnodetest-0-2)
TASK [Check if provided hostnames are valid] ***********************************
ok: [virtnodetest-0-1] => {
"changed": false,
"msg": "All assertions passed"
}
ok: [virtnodetest-0-0] => {
"changed": false,
"msg": "All assertions passed"
}
ok: [virtnodetest-0-2] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [Check if /var/log has enough disk space] *********************************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [Check if the /var is greater than 15G] ***********************************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [Check if disks have logical block size of 512B] **************************
skipping: [virtnodetest-0-1] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
skipping: [virtnodetest-0-0] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
skipping: [virtnodetest-0-2] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
TASK [Check if logical block size is 512 bytes] ********************************
skipping: [virtnodetest-0-1] => (item=Logical Block Size)
skipping: [virtnodetest-0-0] => (item=Logical Block Size)
skipping: [virtnodetest-0-2] => (item=Logical Block Size)
TASK [Get logical block size of VDO devices] ***********************************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [Check if logical block size is 512 bytes for VDO devices] ****************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/firewall_config : Start firewalld if not already started] ***
ok: [virtnodetest-0-2]
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
TASK [gluster.infra/roles/firewall_config : check if required variables are set] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/firewall_config : Open/Close firewalld ports] ********
ok: [virtnodetest-0-2] => (item=2049/tcp)
ok: [virtnodetest-0-1] => (item=2049/tcp)
ok: [virtnodetest-0-0] => (item=2049/tcp)
ok: [virtnodetest-0-2] => (item=54321/tcp)
ok: [virtnodetest-0-0] => (item=54321/tcp)
ok: [virtnodetest-0-1] => (item=54321/tcp)
ok: [virtnodetest-0-2] => (item=5900/tcp)
ok: [virtnodetest-0-1] => (item=5900/tcp)
ok: [virtnodetest-0-0] => (item=5900/tcp)
ok: [virtnodetest-0-2] => (item=5900-6923/tcp)
ok: [virtnodetest-0-0] => (item=5900-6923/tcp)
ok: [virtnodetest-0-1] => (item=5900-6923/tcp)
ok: [virtnodetest-0-2] => (item=5666/tcp)
ok: [virtnodetest-0-1] => (item=5666/tcp)
ok: [virtnodetest-0-0] => (item=5666/tcp)
ok: [virtnodetest-0-2] => (item=16514/tcp)
ok: [virtnodetest-0-1] => (item=16514/tcp)
ok: [virtnodetest-0-0] => (item=16514/tcp)
TASK [gluster.infra/roles/firewall_config : Add/Delete services to firewalld rules] ***
ok: [virtnodetest-0-1] => (item=glusterfs)
ok: [virtnodetest-0-0] => (item=glusterfs)
ok: [virtnodetest-0-2] => (item=glusterfs)
TASK [gluster.infra/roles/backend_setup : Check if vdsm-python package is installed or not] ***
changed: [virtnodetest-0-2]
changed: [virtnodetest-0-1]
changed: [virtnodetest-0-0]
TASK [gluster.infra/roles/backend_setup : Remove the existing LVM filter] ******
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Check that the multipath.conf exists] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is enabled if not] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is running] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Create /etc/multipath/conf.d if doesn't exists] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Get the UUID of the devices] *********
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Check that the blacklist.conf exists] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Create blacklist template content] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Add wwid to blacklist in blacklist.conf file] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Reload multipathd] *******************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Gather facts to determine the OS distribution] ***
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for debian systems.] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] ***
ok: [virtnodetest-0-2]
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
TASK [gluster.infra/roles/backend_setup : Install python-yaml package for Debian systems] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Initialize vdo_devs array] ***********
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Record VDO devices (if any)] *********
skipping: [virtnodetest-0-1] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
skipping: [virtnodetest-0-0] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
skipping: [virtnodetest-0-2] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
TASK [gluster.infra/roles/backend_setup : Configure lvm thinpool extend threshold] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Configure lvm thinpool extend percentage] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Check if vdo block device exists] ****
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Record for missing devices for phase 2] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : set fact if it will at least install 1 vdo device] ***
TASK [gluster.infra/roles/backend_setup : Install VDO dependencies] ************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : set fact about vdo installed deps] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Enable and start vdo service] ********
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Create VDO with specified size] ******
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Set VDO maxDiscardSize as 16M] *******
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Stop VDO volumes] ********************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Start VDO volumes] *******************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Check if valid disktype is provided] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD] ******
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Set PV data alignment for RAID] ******
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Set VG physical extent size for RAID] ***
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : include_tasks] ***********************
included: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml for virtnodetest-0-1, virtnodetest-0-0, virtnodetest-0-2
TASK [gluster.infra/roles/backend_setup : Group devices by volume group name, including existing devices] ***
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Check if vg block device exists] *****
changed: [virtnodetest-0-0] => (item={u'key': u'gluster_vg_sdb', u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}]})
changed: [virtnodetest-0-1] => (item={u'key': u'gluster_vg_sdb', u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}]})
changed: [virtnodetest-0-2] => (item={u'key': u'gluster_vg_sdb', u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}]})
TASK [gluster.infra/roles/backend_setup : Filter none-existing devices] ********
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Record for missing devices for phase 2] ***
ok: [virtnodetest-0-1] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 12:18:33.575598', u'stderr': u'', u'stdout': u'0', u'changed': True, u'failed': False, u'delta': u'0:00:00.009901', u'cmd': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'item': {u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}], u'key': u'gluster_vg_sdb'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'0'], u'start': u'2021-05-13 12:18:33.565697'})
ok: [virtnodetest-0-0] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 10:52:56.886693', u'stderr': u'', u'stdout': u'0', u'changed': True, u'failed': False, u'delta': u'0:00:00.008123', u'cmd': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'item': {u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}], u'key': u'gluster_vg_sdb'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'0'], u'start': u'2021-05-13 10:52:56.878570'})
ok: [virtnodetest-0-2] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 12:25:24.420710', u'stderr': u'', u'stdout': u'0', u'changed': True, u'failed': False, u'delta': u'0:00:00.007307', u'cmd': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'item': {u'value': [{u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}], u'key': u'gluster_vg_sdb'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' test -b /dev/sdb && echo "1" || echo "0"; \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'0'], u'start': u'2021-05-13 12:25:24.413403'})
TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
skipping: [virtnodetest-0-1] => (item={u'key': u'gluster_vg_sdb', u'value': []})
skipping: [virtnodetest-0-0] => (item={u'key': u'gluster_vg_sdb', u'value': []})
skipping: [virtnodetest-0-2] => (item={u'key': u'gluster_vg_sdb', u'value': []})
TASK [gluster.infra/roles/backend_setup : update LVM fact's] *******************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Check if thick-lv block devices exists] ***
changed: [virtnodetest-0-0] => (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'})
changed: [virtnodetest-0-1] => (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'})
changed: [virtnodetest-0-2] => (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'})
TASK [gluster.infra/roles/backend_setup : Record for missing devices for phase 2] ***
skipping: [virtnodetest-0-1] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 12:18:37.528159', u'stderr': u'', u'stdout': u'1', u'changed': True, u'failed': False, u'delta': u'0:00:00.010032', u'cmd': u' echo "1" \n', u'item': {u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' echo "1" \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'1'], u'start': u'2021-05-13 12:18:37.518127'})
skipping: [virtnodetest-0-0] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 10:53:00.863436', u'stderr': u'', u'stdout': u'1', u'changed': True, u'failed': False, u'delta': u'0:00:00.007459', u'cmd': u' echo "1" \n', u'item': {u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' echo "1" \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'1'], u'start': u'2021-05-13 10:53:00.855977'})
skipping: [virtnodetest-0-2] => (item={u'stderr_lines': [], u'ansible_loop_var': u'item', u'end': u'2021-05-13 12:25:28.261106', u'stderr': u'', u'stdout': u'1', u'changed': True, u'failed': False, u'delta': u'0:00:00.007818', u'cmd': u' echo "1" \n', u'item': {u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}, u'rc': 0, u'invocation': {u'module_args': {u'warn': True, u'executable': None, u'_uses_shell': True, u'strip_empty_ends': True, u'_raw_params': u' echo "1" \n', u'removes': None, u'argv': None, u'creates': None, u'chdir': None, u'stdin_add_newline': True, u'stdin': None}}, u'stdout_lines': [u'1'], u'start': u'2021-05-13 12:25:28.253288'})
TASK [gluster.infra/roles/backend_setup : include_tasks] ***********************
included: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml for virtnodetest-0-1, virtnodetest-0-0, virtnodetest-0-2
TASK [gluster.infra/roles/backend_setup : Group devices by volume group name, including existing devices] ***
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Check if vg block device exists] *****
TASK [gluster.infra/roles/backend_setup : Filter none-existing devices] ********
ok: [virtnodetest-0-1]
ok: [virtnodetest-0-0]
ok: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Make sure thick pvs exists in volume group] ***
TASK [gluster.infra/roles/backend_setup : update LVM fact's] *******************
skipping: [virtnodetest-0-1]
skipping: [virtnodetest-0-0]
skipping: [virtnodetest-0-2]
TASK [gluster.infra/roles/backend_setup : Create thick logical volume] *********
failed: [virtnodetest-0-1] (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " WARNING: Device for PV gx6iUE-369Z-3FDP-aRUQ-Wur0-1Xhf-v4g79j not found or rejected by a filter.\n Volume group \"gluster_vg_sdb\" not found\n Cannot process volume group gluster_vg_sdb\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "150G", "vgname": "gluster_vg_sdb"}, "msg": "Volume group gluster_vg_sdb does not exist.", "rc": 5}
failed: [virtnodetest-0-0] (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " Volume group \"gluster_vg_sdb\" not found\n Cannot process volume group gluster_vg_sdb\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "150G", "vgname": "gluster_vg_sdb"}, "msg": "Volume group gluster_vg_sdb does not exist.", "rc": 5}
failed: [virtnodetest-0-2] (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'150G'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": " Volume group \"gluster_vg_sdb\" not found\n Cannot process volume group gluster_vg_sdb\n", "index": 0, "item": {"lvname": "gluster_lv_engine", "size": "150G", "vgname": "gluster_vg_sdb"}, "msg": "Volume group gluster_vg_sdb does not exist.", "rc": 5}
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
virtnodetest-0-0 : ok=19 changed=3 unreachable=0 failed=1 skipped=41 rescued=0 ignored=0
virtnodetest-0-1 : ok=20 changed=4 unreachable=0 failed=1 skipped=41 rescued=0 ignored=0
virtnodetest-0-2 : ok=19 changed=3 unreachable=0 failed=1 skipped=41 rescued=0 ignored=0
Please check /var/log/cockpit/ovirt-dashboard/gluster-deployment.log for more informations.
======================================================================
How can we resolve this issue?
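(A hedged sketch of what we would check next: the "Check if vg block device exists" task above echoed 0 for /dev/sdb on all three nodes, and the PV warning mentions a device "not found or rejected by a filter", so either /dev/sdb is not present as a block device or something -- an lvm.conf filter, a multipath mapping, or stale metadata -- is hiding it. Only wipe the disk if it really holds nothing you need:)
lsblk; pvs; vgs                         # does /dev/sdb exist at all, and is there a leftover PV/VG?
wipefs --no-act /dev/sdb                # list existing signatures without touching them
grep -E '^\s*(global_)?filter' /etc/lvm/lvm.conf
multipath -ll                           # is sdb hidden behind a multipath device?
# only if the disk is empty / expendable:
# wipefs -a /dev/sdb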
Creating Snapshots failed
by jb
Hello Community,
since I upgraded our cluster to oVirt 4.4.6.8-1.el8 I'm no longer able
to create snapshots on certain VMs. For example, I have two Debian 10
VMs; I can take a snapshot of one, but not of the other.
Both are up to date and use the same qemu-guest-agent version.
I tried to create snapshots via the API and in the web GUI; both give
the same result.
In the attachment you will find a snippet from engine.log.
Any help would be wonderful!
Regards,
Jonathan