Restart oVirt-Engine
by Jeremey Wise
How do I restart the oVirt engine without rebooting the hosting system?
# I tried the below, but it does not seem to affect the virtual machine
[root@thor iso]# systemctl restart ov
ovirt-ha-agent.service ovirt-imageio.service
ovn-controller.service ovs-delete-transient-ports.service
ovirt-ha-broker.service ovirt-vmconsole-host-sshd.service
ovsdb-server.service ovs-vswitchd.service
[root@thor iso]#
# You cannot restart the VM "HostedEngine", as it responds:
Error while executing action:
HostedEngine:
- Cannot restart VM. This VM is not managed by the engine.
The reason: I had to do some work on a node and rebooted it. It is back up,
the network is fine, Cockpit is working fine, and Gluster is fine, but
oVirt Engine refuses to accept that the node is up.
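For reference, a minimal sketch of two ways this is usually done on a hosted-engine setup like this one (the engine VM FQDN is a placeholder):
# Option 1: restart only the engine service, from inside the engine VM
ssh root@<engine-vm-fqdn>
systemctl restart ovirt-engine
# Option 2: restart the whole HostedEngine VM from a host, bypassing the engine
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
hosted-engine --vm-status        # wait until the VM is reported down
hosted-engine --vm-start
hosted-engine --set-maintenance --mode=none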
--
penguinpages <jeremey.wise(a)gmail.com>
3 months, 2 weeks
SPM and Task error ...
by Enrico
Hi all,
my oVirt cluster has 3 hypervisors running CentOS 7.5.1804; vdsm is
4.20.39.1-1.el7, oVirt Engine is 4.2.4.5-1.el7, and the storage systems are
HP MSA P2000 and 2050 (Fibre Channel).
I need to stop one of the hypervisors for maintenance, but this system is
the storage pool manager (SPM).
For this reason I decided to manually select the SPM on one of the other
nodes, but this operation is not successful.
In the oVirt Engine log (engine.log) the error is this:
2019-07-25 12:39:16,744+02 INFO
[org.ovirt.engine.core.bll.storage.pool.ForceSelectSPMCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] Running command:
ForceSelectSPMCommand internal: false. Entities affected : ID:
81c9bd3c-ae0a-467f-bf7f-63ab30cd8d9e Type: VDSAction group
MANIPULATE_HOST with role type ADMIN
2019-07-25 12:39:16,745+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopOnIrsVDSCommand(
SpmStopOnIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false'}), log id: 37bf4639
2019-07-25 12:39:16,747+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
ResetIrsVDSCommand(
ResetIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
vdsId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
ignoreStopFailed='false'}), log id: 2522686f
2019-07-25 12:39:16,749+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopVDSCommand(HostName = infn-vm05.management,
SpmStopVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7'}), log id: 1810fd8b
2019-07-25 12:39:16,758+02 *ERROR*
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] SpmStopVDSCommand::Not
stopping SPM on vds 'infn-vm05.management', pool id
'18d57688-6ed4-43b8-bd7c-0665b55950b7' as there are uncleared tasks
'Task 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e', status 'running''
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopVDSCommand, log id: 1810fd8b
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
ResetIrsVDSCommand, log id: 2522686f
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopOnIrsVDSCommand, log id: 37bf4639
2019-07-25 12:39:16,760+02 *ERROR*
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] EVENT_ID:
USER_FORCE_SELECTED_SPM_STOP_FAILED(4,096), Failed to force select
infn-vm07.management as the SPM due to a failure to stop the current SPM.
while in the hypervisor (SPM) vdsm.log:
2019-07-25 12:39:16,744+02 INFO
[org.ovirt.engine.core.bll.storage.pool.ForceSelectSPMCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] Running command:
ForceSelectSPMCommand internal: false. Entities affected : ID:
81c9bd3c-ae0a-467f-bf7f-63ab30cd8d9e Type: VDSAction group
MANIPULATE_HOST with role type ADMIN
2019-07-25 12:39:16,745+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopOnIrsVDSCommand(
SpmStopOnIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false'}), log id: 37bf4639
2019-07-25 12:39:16,747+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
ResetIrsVDSCommand(
ResetIrsVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
vdsId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
ignoreStopFailed='false'}), log id: 2522686f
2019-07-25 12:39:16,749+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] START,
SpmStopVDSCommand(HostName = infn-vm05.management,
SpmStopVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7'}), log id: 1810fd8b
2019-07-25 12:39:16,758+02 *ERROR*
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] SpmStopVDSCommand::Not
stopping SPM on vds 'infn-vm05.management', pool id
'18d57688-6ed4-43b8-bd7c-0665b55950b7' as there are uncleared tasks
'Task 'fdcf4d1b-82fe-49a6-b233-323ebe568f8e', status 'running''
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopVDSCommand, log id: 1810fd8b
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (default
task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
ResetIrsVDSCommand, log id: 2522686f
2019-07-25 12:39:16,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SpmStopOnIrsVDSCommand]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] FINISH,
SpmStopOnIrsVDSCommand, log id: 37bf4639
2019-07-25 12:39:16,760+02 *ERROR*
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-30) [7c374384-f884-4dc9-87d0-7af27dce706b] EVENT_ID:
USER_FORCE_SELECTED_SPM_STOP_FAILED(4,096), Failed to force select
infn-vm07.management as the SPM due to a failure to stop the current SPM.
2019-07-25 12:39:18,660+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,660+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,750+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,750+02 *ERROR*
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
BaseAsyncTask::logEndTaskFailure: Task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' (Parent Command 'Unknown',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
with failure:
2019-07-25 12:39:18,750+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,751+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 34ae2b2f
2019-07-25 12:39:18,752+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: d3a78ad
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: d3a78ad
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: 34ae2b2f
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
BaseAsyncTask::onTaskEndSuccess: Task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' (Parent Command 'Unknown',
Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended
successfully.
2019-07-25 12:39:18,757+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,758+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 42de0c2b
2019-07-25 12:39:18,759+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 4895c79c
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: 4895c79c
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: 42de0c2b
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Task id
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' has passed pre-polling period
time and should be polled. Pre-polling period is 60000 millis.
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] Cleaning zombie
tasks: Clearing async task 'Unknown' that started at 'Fri May 03
14:48:50 CEST 2019'
2019-07-25 12:39:18,764+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: Attempting to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e'
2019-07-25 12:39:18,765+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
SPMClearTaskVDSCommand(
SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='18d57688-6ed4-43b8-bd7c-0665b55950b7',
ignoreFailoverLimit='false',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: da77af2
2019-07-25 12:39:18,766+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] START,
HSMClearTaskVDSCommand(HostName = infn-vm05.management,
HSMTaskGuidBaseVDSCommandParameters:{hostId='751f3e99-b95e-4c31-bc38-77f5661a0bdc',
taskId='fdcf4d1b-82fe-49a6-b233-323ebe568f8e'}), log id: 530694fb
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
HSMClearTaskVDSCommand, log id: 530694fb
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) [] FINISH,
SPMClearTaskVDSCommand, log id: da77af2
2019-07-25 12:39:18,771+02 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedThreadFactory-engineScheduled-Thread-67) []
SPMAsyncTask::ClearAsyncTask: At time of attempt to clear task
'fdcf4d1b-82fe-49a6-b233-323ebe568f8e' the response code was
'TaskStateError' and message was 'Operation is not allowed in this task
state: ("can't clean in state running",)'. Task will not be cleaned
There seems to be some relation between this error and a task that has
remained hanging; from the SPM server:
# vdsm-client Task getInfo taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
{
"verb": "prepareMerge",
"id": "fdcf4d1b-82fe-49a6-b233-323ebe568f8e"
}
# vdsm-client Task getStatus taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
{
"message": "running job 1 of 1",
"code": 0,
"taskID": "fdcf4d1b-82fe-49a6-b233-323ebe568f8e",
"taskResult": "",
"taskState": "running"
}
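For reference, a minimal sketch of how such a hanging task can be inspected and, only if it is confirmed to be stale, stopped and cleared on the SPM host. This is not a definitive fix: stopping a live prepareMerge/snapshot job can corrupt the disk chain, so verify first that nothing is actually using the task.
# list all tasks known to this host
vdsm-client Host getAllTasksStatuses
vdsm-client Task getStatus taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
# only if the task is verified to be orphaned:
vdsm-client Task stop taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e
vdsm-client Task clear taskID=fdcf4d1b-82fe-49a6-b233-323ebe568f8e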
How can I solve this problem?
Thanks a lot for your help!
Best Regards
Enrico
--
_______________________________________________________________________
Enrico Becchetti Servizio di Calcolo e Reti
Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY)
Phone:+39 075 5852777 Mail: Enrico.Becchetti<at>pg.infn.it
_______________________________________________________________________
5 months
How to re-enroll (or renew) host certificates for a single-host hosted-engine deployment?
by Derek Atkins
Hi,
I've got a single-host hosted-engine deployment that I originally
installed with 4.0 and have upgraded over the years to 4.3.10. I and some
of my users have upgraded remote-viewer and now I get an error when I try
to view the console of my VMs:
(remote-viewer:8252): Spice-WARNING **: 11:30:41.806:
../subprojects/spice-common/common/ssl_verify.c:477:openssl_verify: Error
in server certificate verification: CA signature digest algorithm too weak
(num=68:depth0:/O=<My Org Name>/CN=<Host's Name>)
I am 99.99% sure this is because the old certs use SHA1.
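One way to confirm that suspicion (a sketch; the paths below are the usual VDSM/libvirt certificate locations on a host and may differ on your install):
# on the host
openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -text | grep 'Signature Algorithm'
openssl x509 -in /etc/pki/libvirt/clientcert.pem -noout -text | grep 'Signature Algorithm'
# sha1WithRSAEncryption here would confirm the weak-digest theory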
I reran engine-setup on the engine and it asked me if I wanted to renew
the PKI, and I answered yes. This replaced many[1] of the certificates in
/etc/pki/ovirt-engine/certs on the engine, but it did not update the
Host's certificate.
All the documentation I've seen says that to refresh this certificate I
need to put the host into maintenance mode and then re-enroll. However, I
cannot do that: this is a single-host system, so I cannot put the host
into local maintenance -- there is no place to migrate the VMs (let alone
the Engine VM).
So, is there a command-line way to re-enroll manually and update the
host certs? Or some other way to get all the leftover certs renewed?
Thanks,
-derek
[1] Not only did it not update the Host's cert, it did not update any of
the vmconsole-proxy certs, nor the certs in /etc/pki/ovirt-vmconsole/, and
obviously nothing in /etc/pki/ on the host itself.
--
Derek Atkins 617-623-3745
derek(a)ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant
9 months, 4 weeks
Changing disk QoS causes segfault with IO-Threads enabled (oVirt 4.3.0.4-1.el7)
by jloh@squiz.net
We recently upgraded to 4.3.0 and have found that changing disk QoS settings on VMs while IO-Threads is enabled causes them to segfault and the VM to reboot. We've been able to replicate this across several VMs. VMs with IO-Threads disabled/turned off do not segfault when changing the QoS.
Mar 1 11:49:06 srvXX kernel: IO iothread1[30468]: segfault at fffffffffffffff8 ip 0000557649f2bd24 sp 00007f80de832f60 error 5 in qemu-kvm[5576498dd000+a03000]
Mar 1 11:49:06 srvXX abrt-hook-ccpp: invalid number 'iothread1'
Mar 1 11:49:11 srvXX libvirtd: 2019-03-01 00:49:11.116+0000: 13365: error : qemuMonitorIORead:609 : Unable to read from monitor: Connection reset by peer
Happy to supply more logs if they'll help, but I'm just wondering whether anyone else has experienced this or knows of a current fix other than turning IO-Threads off.
Cheers.
1 year
Deploy oVirt Engine fail behind proxy
by Matteo Bonardi
Hi,
I am trying to deploy the oVirt Engine following the self-hosted engine installation procedure in the documentation.
The deployment servers are behind a proxy, and I set it in the environment and in yum.conf before running the deploy.
The deploy fails because the oVirt Engine VM cannot resolve the AppStream repository URL:
[ INFO ] TASK [ovirt.engine-setup : Install oVirt Engine package]
[ ERROR ] fatal: [localhost -> ovirt-manager.mydomain]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'AppStream': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for http://mirrorlist.centos.org/?release=8&arch=x86_64&repo=AppStream&infra=... [Could not resolve host: mirrorlist.centos.org]", "rc": 1, "results": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources
[ INFO ] TASK [ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Fetch logs from the engine VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Give the vm time to flush dirty buffers]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Copy engine logs]
[ INFO ] TASK [ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Clean local storage pools]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Destroy local storage-pool {{ he_local_vm_dir | basename }}]
[ INFO ] TASK [ovirt.hosted_engine_setup : Undefine local storage-pool {{ he_local_vm_dir | basename }}]
[ INFO ] TASK [ovirt.hosted_engine_setup : Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
[ INFO ] TASK [ovirt.hosted_engine_setup : Undefine local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20201109165237.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20201109164244-b3e8sd.log
How can I set the proxy for the engine VM?
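For reference, a minimal sketch of the host-side part (the proxy URL is a placeholder). Note that the engine VM is a separate environment, so a proxy exported only on the host does not automatically reach dnf inside the appliance; pointing dnf at the proxy from inside the engine VM, once it is reachable, is one assumed workaround:
# on the deployment host, before running the deploy
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
hosted-engine --deploy
# inside the engine VM (once reachable over the temporary network):
echo 'proxy=http://proxy.example.com:3128' >> /etc/dnf/dnf.conf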
Ovirt version:
[root@myhost ~]# rpm -qa | grep ovirt-engine-appliance
ovirt-engine-appliance-4.4-20200916125954.1.el8.x86_64
[root@myhost ~]# rpm -qa | grep ovirt-hosted-engine-setup
ovirt-hosted-engine-setup-2.4.6-1.el8.noarch
OS version:
[root@myhost ~]# cat /etc/centos-release
CentOS Linux release 8.2.2004 (Core)
[root@myhost ~]# uname -a
Linux myhost.mydomain 4.18.0-193.28.1.el8_2.x86_64 #1 SMP Thu Oct 22 00:20:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Thanks for the help.
Regards,
Matteo
1 year, 1 month
Unable to install oVirt on RHEL7.5
by SS00514758@techmahindra.com
Hi All,
I am unable to install oVirt on RHEL 7.5. To install it, I am following the link below:
https://www.ovirt.org/documentation/install-guide/chap-Installing_oVirt.html
It is not working for me: a couple of dependencies are not getting installed, and because of this I am not able to run ovirt-engine. Below are the dependency packages that fail to install:
Error: Package: collectd-write_http-5.8.0-6.1.el7.x86_64 (@ovirt-4.2-centos-opstools)
Requires: collectd(x86-64) = 5.8.0-6.1.el7
Removing: collectd-5.8.0-6.1.el7.x86_64 (@ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-6.1.el7
Updated By: collectd-5.8.1-1.el7.x86_64 (epel)
collectd(x86-64) = 5.8.1-1.el7
Available: collectd-5.7.2-1.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.7.2-1.el7
Available: collectd-5.7.2-3.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.7.2-3.el7
Available: collectd-5.8.0-2.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-2.el7
Available: collectd-5.8.0-3.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-3.el7
Available: collectd-5.8.0-5.el7.x86_64 (ovirt-4.2-centos-opstools)
collectd(x86-64) = 5.8.0-5.el7
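The conflict above is between the collectd packages in EPEL (5.8.1) and the ones shipped by the ovirt-4.2-centos-opstools repository (5.8.0-6.1). A common workaround, sketched below, is to stop EPEL from overriding the opstools collectd packages (the repo file name is the usual one for EPEL and may differ):
# in /etc/yum.repos.d/epel.repo, add the following line under the [epel] section:
#   exclude=collectd*
yum clean metadata
yum install ovirt-engine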
Please help me install this.
I am looking forward to resolving this issue.
Regards
Sumit Sahay
1 year, 3 months
Re: Failed to synchronize networks of Provider ovirt-provider-ovn
by Mail SET Inc. Group
Yes, I used the same manual to change the WebUI SSL.
ovirt-ca-file= is the same SSL file that the WebUI uses.
Yes, I restarted ovirt-provider-ovn, I restarted the engine, I restarted everything I could restart. Nothing...
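A small sketch of how the chain can be checked (paths and the FQDN are taken from the config quoted further down in this thread):
grep ovirt-ca-file /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf
# verify that the engine's HTTPS certificate validates against that CA file
openssl s_client -connect engine.set.local:443 -CAfile /etc/pki/ovirt-engine/apache-ca.pem </dev/null | grep 'Verify return code'
systemctl restart ovirt-provider-ovn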
> On 12 Sep 2018, at 16:11, Dominik Holler <dholler(a)redhat.com> wrote:
>
> On Wed, 12 Sep 2018 14:23:54 +0300
> "Mail SET Inc. Group" <mail(a)set-pro.net> wrote:
>
>> Ok!
>
> Not exactly, please use users(a)ovirt.org for such questions.
> Others should benefit from these questions, too.
> Please write the next mail to users(a)ovirt.org and keep me in CC.
>
>> What i did:
>>
>> 1) install oVirt «from box» (4.2.5.2-1.el7);
>> 2) generate own ssl for my engine using my FreeIPA CA, Install it and
>
> What does "Install it" mean? You can use the doc from the following link
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2/...
>
> Ensure that ovirt-ca-file= in
> /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf
> points to the correct file and ovirt-provider-ovn is restarted.
>
>> got this issue;
>>
>>
>> [root@engine ~]# tail -n 50 /var/log/ovirt-provider-ovn.log
>> 2018-09-12 14:10:23,828 root [SSL: CERTIFICATE_VERIFY_FAILED]
>> certificate verify failed (_ssl.c:579) Traceback (most recent call
>> last): File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py",
>> line 133, in _handle_request method, path_parts, content
>> File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py",
>> line 175, in handle_request return
>> self.call_response_handler(handler, content, parameters) File
>> "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 33, in
>> call_response_handler return response_handler(content, parameters)
>> File "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py",
>> line 62, in post_tokens user_password=user_password) File
>> "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26, in
>> create_token return auth.core.plugin.create_token(user_at_domain,
>> user_password) File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py", line
>> 48, in create_token timeout=self._timeout()) File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 75,
>> in create_token username, password, engine_url, ca_file, timeout)
>> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line
>> 91, in _get_sso_token timeout=timeout File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 54,
>> in wrapper response = func(*args, **kwargs) File
>> "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 47,
>> in wrapper raise BadGateway(e) BadGateway: [SSL:
>> CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)
>>
>>
>> [root@engine ~]# tail -n 20 /var/log/ovirt-engine/engine.log
>> 2018-09-12 14:10:23,773+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685] Lock
>> Acquired to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}' 2018-09-12 14:10:23,778+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685]
>> Running command: SyncNetworkProviderCommand internal: true.
>> 2018-09-12 14:10:23,836+03 ERROR
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685]
>> Command
>> 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand'
>> failed: EngineException: (Failed with error Bad Gateway and code
>> 5050) 2018-09-12 14:10:23,837+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-47) [316db685] Lock
>> freed to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}' 2018-09-12 14:14:12,477+03 INFO
>> [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default
>> task-6) [] User admin@internal successfully logged in with scopes:
>> ovirt-app-admin ovirt-app-api ovirt-app-portal
>> ovirt-ext=auth:sequence-priority=~ ovirt-ext=revoke:revoke-all
>> ovirt-ext=token-info:authz-search
>> ovirt-ext=token-info:public-authz-search
>> ovirt-ext=token-info:validate ovirt-ext=token:password-access
>> 2018-09-12 14:14:12,587+03 INFO
>> [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default
>> task-6) [1bf1b763] Running command: CreateUserSessionCommand
>> internal: false. 2018-09-12 14:14:12,628+03 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-6) [1bf1b763] EVENT_ID: USER_VDC_LOGIN(30), User
>> admin@internal-authz connecting from '10.0.3.61' using session
>> 's8jAm7BUJGlicthm6yZBA3CUM8QpRdtwFaK3M/IppfhB3fHFB9gmNf0cAlbl1xIhcJ2WX+ww7e71Ri+MxJSsIg=='
>> logged in. 2018-09-12 14:14:30,972+03 INFO
>> [org.ovirt.engine.core.bll.provider.ImportProviderCertificateCommand]
>> (default task-6) [ee3cc8a7-4485-4fdf-a0c2-e9d67b5cfcd3] Running
>> command: ImportProviderCertificateCommand internal: false. Entities
>> affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
>> SystemAction group CREATE_STORAGE_POOL with role type ADMIN
>> 2018-09-12 14:14:30,982+03 INFO
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-6) [ee3cc8a7-4485-4fdf-a0c2-e9d67b5cfcd3] EVENT_ID:
>> PROVIDER_CERTIFICATE_IMPORTED(213), Certificate for provider
>> ovirt-provider-ovn was imported. (User: admin@internal-authz)
>> 2018-09-12 14:14:31,006+03 INFO
>> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
>> (default task-6) [a48d94ab-b0b2-42a2-a667-0525b4c652ea] Running
>> command: TestProviderConnectivityCommand internal: false. Entities
>> affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
>> SystemAction group CREATE_STORAGE_POOL with role type ADMIN
>> 2018-09-12 14:14:31,058+03 ERROR
>> [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand]
>> (default task-6) [a48d94ab-b0b2-42a2-a667-0525b4c652ea] Command
>> 'org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand'
>> failed: EngineException: (Failed with error Bad Gateway and code
>> 5050) 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'default' is using 0 threads out of 1, 5 threads waiting for
>> tasks. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'engine' is using 0 threads out of 500, 16 threads waiting for
>> tasks and 0 tasks in queue. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'engineScheduled' is using 0 threads out of 100, 100 threads
>> waiting for tasks. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'engineThreadMonitoring' is using 1 threads out of 1, 0 threads
>> waiting for tasks. 2018-09-12 14:15:10,954+03 INFO
>> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
>> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread
>> pool 'hostUpdatesChecker' is using 0 threads out of 5, 2 threads
>> waiting for tasks. 2018-09-12 14:15:23,843+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f] Lock
>> Acquired to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}' 2018-09-12 14:15:23,849+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f]
>> Running command: SyncNetworkProviderCommand internal: true.
>> 2018-09-12 14:15:23,900+03 ERROR
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f]
>> Command
>> 'org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand'
>> failed: EngineException: (Failed with error Bad Gateway and code
>> 5050) 2018-09-12 14:15:23,901+03 INFO
>> [org.ovirt.engine.core.bll.provider.network.SyncNetworkProviderCommand]
>> (EE-ManagedThreadFactory-engineScheduled-Thread-61) [2455041f] Lock
>> freed to object
>> 'EngineLock:{exclusiveLocks='[14e4fb72-9764-4757-b37d-4d487995571a=PROVIDER]',
>> sharedLocks=''}'
>>
>>
>> [root@engine ~]#
>> cat /etc/ovirt-provider-ovn/conf.d/10-setup-ovirt-provider-ovn.conf #
>> This file is automatically generated by engine-setup. Please do not
>> edit manually [OVN REMOTE] ovn-remote=ssl:127.0.0.1:6641
>> [SSL]
>> https-enabled=true
>> ssl-cacert-file=/etc/pki/ovirt-engine/ca.pem
>> ssl-cert-file=/etc/pki/ovirt-engine/certs/ovirt-provider-ovn.cer
>> ssl-key-file=/etc/pki/ovirt-engine/keys/ovirt-provider-ovn.key.nopass
>> [OVIRT]
>> ovirt-sso-client-secret=Ms7Gw9qNT6IkXu7oA54tDmxaZDIukABV
>> ovirt-host=https://engine.set.local:443
>> ovirt-sso-client-id=ovirt-provider-ovn
>> ovirt-ca-file=/etc/pki/ovirt-engine/apache-ca.pem
>> [PROVIDER]
>> provider-host=engine.set.local
>>
>>
>>> On 12 Sep 2018, at 13:59, Dominik Holler <dholler(a)redhat.com>
>>> wrote:
>>>
>>> On Wed, 12 Sep 2018 13:04:53 +0300
>>> "Mail SET Inc. Group" <mail(a)set-pro.net> wrote:
>>>
>>>> Hello Dominik!
>>>> I have the same issue with the OVN provider and SSL:
>>>> https://www.mail-archive.com/users@ovirt.org/msg47020.html
>>>> <https://www.mail-archive.com/users@ovirt.org/msg47020.html> But
>>>> changing the certificates does not help to resolve it. Maybe you can
>>>> help me with this?
>>>
>>> Sure. Can you please share the relevant lines of
>>> ovirt-provider-ovn.log and engine.log, and say whether you
>>> are using the certificates generated by engine-setup, with
>>> users(a)ovirt.org? Thanks,
>>> Dominik
>>>
>>
>
>
1 year, 5 months
engine-setup failing on 4.3.2 -> 4.3.3 fails during Engine schema refresh fail
by Edward Berger
I was trying to upgrade a hyperconverged oVirt hosted engine, and it failed in
the engine-setup command with these errors and warnings.
...
[ INFO ] Creating/refreshing Engine database schema
[ ERROR ] schema.sh: FATAL: Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql
[ ERROR ] Failed to execute stage 'Misc configuration': Engine schema
refresh failed
...
[ INFO ] Yum Verify: 16/16: ovirt-engine-tools.noarch 0:4.3.3.5-1.el7 - e
[WARNING] Rollback of DWH database postponed to Stage "Clean up"
[ INFO ] Rolling back database schema
...
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
Attaching engine-setup logfile.
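In case it helps others hitting the same failure, the underlying SQL error is usually captured in the setup log that engine-setup prints at the end; a sketch:
grep -B5 -A30 'add_foreign_key_to_image_transfers' /var/log/ovirt-engine/setup/ovirt-engine-setup-*.log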
1 year, 9 months
Out-of-sync networks can only be detached
by Sakhi Hadebe
Hi,
I have a 3-node oVirt cluster. I have configured 2 logical networks:
ovirtmgmt and public. The public logical network is attached on only 2 nodes
and fails to attach on the 3rd node with the error below:
Invalid operation, out-of-sync network 'public' can only be detached.
Please help; I have been stuck on this for almost the whole day now. How do
I fix this error?
--
Regards,
Sakhi Hadebe
2 years, 1 month
Network Address Change
by Paul.LKW
Hi All:
I just have a case where I need to change the oVirt host and engine IP
addresses due to a data center decommission. I checked on the hosted-engine
host and there are some files I could change;
in ovirt-hosted-engine/hosted-engine.conf:
ca_subject="O=simple.com, CN=1.2.3.4"
gateway=1.2.3.254
and of course I need to change the ovirtmgmt interface IP too. I think
just changing the above lines could do the trick, but where could I change
the other hosts' IPs in the cluster?
I think I will lose all the hosts once the hosted-engine host's IP is
changed, as it is in a different subnet.
Is there a command-line tool that could do that, or could someone with such
experience share it?
Best Regards,
Paul.LKW
2 years, 3 months
VM HostedEngine is down with error
by souvaliotimaria@mail.com
Hello everyone,
I have a replica 2 + arbiter installation, and this morning the Hosted Engine gave the following error on the UI and resumed on a different node (node3) than the one it was originally running on (node1). (The original node has more memory than the one it ended up on, but the latter had a better memory usage percentage at the time.) Also, the only way I discovered that the migration had happened and that there was an Error in Events was because I logged in to the web interface of oVirt for a routine inspection. Besides that, everything was working properly and still is.
The error that popped is the following:
VM HostedEngine is down with error. Exit message: internal error: qemu unexpectedly closed the monitor:
2020-09-01T06:49:20.749126Z qemu-kvm: warning: All CPU(s) up to maxcpus should be described in NUMA config, ability to start up with partial NUMA mappings is obsoleted and will be removed in future
2020-09-01T06:49:20.927274Z qemu-kvm: -device virtio-blk-pci,iothread=iothread1,scsi=off,bus=pci.0,addr=0x7,drive=drive-ua-d5de54b6-9f8e-4fba-819b-ebf6780757d2,id=ua-d5de54b6-9f8e-4fba-819b-ebf6780757d2,bootindex=1,write-cache=on: Failed to get "write" lock
Is another process using the image?.
From what I could gather, this concerns the following snippet from HostedEngine.xml; it is the virtio disk of the Hosted Engine:
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads' iothread='1'/>
<source file='/var/run/vdsm/storage/80f6e393-9718-4738-a14a-64cf43c3d8c2/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7'>
<seclabel model='dac' relabel='no'/>
</source>
<target dev='vda' bus='virtio'/>
<serial>d5de54b6-9f8e-4fba-819b-ebf6780757d2</serial>
<alias name='ua-d5de54b6-9f8e-4fba-819b-ebf6780757d2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
I've tried looking into the logs and the sar command, but I couldn't find anything to relate to the above errors or to determine the reason they happened. Is this a Gluster or a QEMU problem?
The Hosted Engine had been manually migrated to node1 five days earlier.
Is there a standard practice I could follow to determine what happened and secure my system?
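For reference, a minimal sketch of checks that can help tell the two apart (the image path comes from the disk XML above; the Gluster volume name 'engine' is an assumption):
# is some other qemu process still holding the engine disk image on any host?
lsof /var/run/vdsm/storage/80f6e393-9718-4738-a14a-64cf43c3d8c2/d5de54b6-9f8e-4fba-819b-ebf6780757d2/a48555f4-be23-4467-8a54-400ae7baf9d7
# is the replica healthy, or are there pending/ongoing heals?
gluster volume heal engine info
gluster volume status engine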
Thank you very much for your time,
Maria Souvalioti
2 years, 11 months
OVS switch type for hosted-engine
by Devin A. Bougie
Is it possible to set up a hosted engine using the OVS switch type instead of Legacy? If it's not possible to start out as OVS, instructions for switching from Legacy to OVS after the fact would be greatly appreciated.
Many thanks,
Devin
2 years, 11 months
USB3 redirection
by Rik Theys
Hi,
I'm trying to assign a USB3 controller to a CentOS 7.4 VM in oVirt 4.1
with USB redirection enabled.
I've created the following file in /etc/ovirt-engine/osinfo.conf.d:
01-usb.properties with content
os.other.devices.usb.controller.value = nec-xhci
and have restarted ovirt-engine.
If I disable USB support in the web interface for the VM, the xhci
controller is added to the VM (I can see it on the qemu-kvm
command line), but USB redirection is not available.
If I enable USB support in the UI, no xhci controller is added (only 4
uhci controllers).
Is there a way to make the controllers used for USB redirection xhci controllers?
Regards,
Rik
--
Rik Theys
System Engineer
KU Leuven - Dept. Elektrotechniek (ESAT)
Kasteelpark Arenberg 10 bus 2440 - B-3001 Leuven-Heverlee
+32(0)16/32.11.07
----------------------------------------------------------------
<<Any errors in spelling, tact or fact are transmission errors>>
3 years
OVN routing and firewalling in oVirt
by Gianluca Cecchi
Hello,
how do we manage routing between different OVN networks in oVirt?
And between OVN networks and physical ones?
Based on the architecture described here:
http://openvswitch.org/support/dist-docs/ovn-architecture.7.html
I see the terms logical routers and gateway routers respectively, but how do
they apply to the oVirt configuration?
Do I have to choose between setting up a specialized VM or a physical one,
or is it applicable/advisable to put the gateway functionality on the oVirt
host itself?
Is there any security policy (like security groups in OpenStack) to
implement?
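As a partial sketch of the east-west part: routing between two OVN networks can be wired up directly in the OVN northbound database with a logical router, run wherever the OVN central/northbound DB lives (the engine machine in a default oVirt setup). The switch names net1/net2 stand for whatever "ovn-nbctl ls-list" shows for your oVirt networks, and the MAC/subnet are placeholders; north-south routing to physical networks still needs a gateway (host chassis or a router VM) and is not covered here.
ovn-nbctl lr-add lr0
ovn-nbctl lrp-add lr0 lr0-net1 40:44:00:00:00:01 192.168.10.1/24
ovn-nbctl lsp-add net1 net1-lr0
ovn-nbctl lsp-set-type net1-lr0 router
ovn-nbctl lsp-set-addresses net1-lr0 router
ovn-nbctl lsp-set-options net1-lr0 router-port=lr0-net1
# repeat the last five commands for net2 with its own subnet and MAC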
Thanks,
Gianluca
3 years
Install hosted-engine - Task Get local VM IP failed
by florentl
Hi all,
I am trying to install hosted-engine on node: ovirt-node-ng-4.2.3-0.20180518.
Every time, I get stuck on:
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed":
true, "cmd": "virsh -r net-dhcp-leases default | grep -i
00:16:3e:6c:5a:91 | awk '{ print $5 }' | cut -f1 -d'/'", "delta":
"0:00:00.108872", "end": "2018-06-01 11:17:34.421769", "rc": 0, "start":
"2018-06-01 11:17:34.312897", "stderr": "", "stderr_lines": [],
"stdout": "", "stdout_lines": []}
I tried with a static IP address and with DHCP, but both failed.
To be more specific, I installed three nodes and deployed GlusterFS with
the wizard. I'm in a nested virtualization environment for this lab
(VMware ESXi hypervisor).
My node IP is 192.168.176.40, and I want the hosted-engine VM to have
192.168.176.43.
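While the task keeps retrying, a quick way to see what is actually happening on the host (the MAC in the failing command is the one from the error above; during this phase the local engine VM runs on libvirt's default network):
virsh -r list --all
virsh -r net-dhcp-leases default
# an empty lease table means the local VM never obtained an address from the
# libvirt default network, which can happen in nested setups if the virtual
# NIC or nested-virtualization flags are not passed through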
Thanks,
Florent
3 years, 1 month
Host needs to be reinstalled after configuring power management
by Andrew DeMaria
Hi,
I am running ovirt 4.3 and have found the following action item immediately
after configuring power management for a host:
Host needs to be reinstalled as important configuration changes were
applied on it.
The thing is - I've just freshly installed this host and it seems strange
that I need to reinstall it.
Is there a better way to install a host and configure power management
without having to reinstall it after?
Thanks,
Andrew
3 years, 2 months
Import an exported VM using Ansible
by paolo@airaldi.it
Hello everybody!
I'm trying to automate copying a VM from one data center to another using an Ansible playbook.
I'm able to:
- Create a snapshot of the source VM
- create a clone from the snapshot
- remove the snapshot
- attach an Export Domain
- export the clone to the Export Domain
- remove the clone
- detach the Export domain from the source Datacenter and attach to the destination.
Unfortunately I cannot find a module to:
- import the VM from the Export Domain
- delete the VM image from the Export Domain.
Any hint on how to do that?
Thanks in advance. Cheers.
Paolo
PS: if someone is interested I can share the playbook.
3 years, 2 months
did 4.3.9 reset bug https://bugzilla.redhat.com/show_bug.cgi?id=1590266
by kelley bryan
I am experiencing the following error message in the ovirt-hosted-engine-setup-ansible-create_target_vm log:
{2020-05-06 14:15:30,024-0500 ERROR ansible failed {'status': 'FAILED', 'ansible_type': 'task', 'ansible_task': u"Fail if Engine IP is different from engine's he_fqdn resolved IP", 'ansible_result': u'type: <type \'dict\'>\nstr: {\'msg\': u"Engine VM IP address is while the engine\'s he_fqdn ovirt1-engine.kelleykars.org resolves to 192.168.122.2. If you are using DHCP, check your DHCP reservation configuration", \'changed\': False, \'_ansible_no_log\': False}', 'task_duration': 1, 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}}
Bug 1590266 says it should report "the engine VM IP address xxx.xxx.xxx.xxx while the engine's he_fqdn is xxxxxxxxx".
I need to see what it thinks is wrong, as both dig on the engine FQDN and dig -x on the IP return the correct information.
Now this issue looks like it may be in play, but I don't see the failed readiness check from https://access.redhat.com/solutions/4462431 in this log.
Or is it because the VM fails or dies, or ...?
3 years, 3 months
Lots of storage.MailBox.SpmMailMonitor
by Fabrice Bacchella
My vdsm log files are huge:
-rw-r--r-- 1 vdsm kvm 1.8G Nov 22 11:32 vdsm.log
And this is just half an hour of logs:
$ head -1 vdsm.log
2018-11-22 11:01:12,132+0100 ERROR (mailbox-spm) [storage.MailBox.SpmMailMonitor] mailbox 2 checksum failed, not clearing mailbox, clearing new mail (data='...lots of data', expected='\xa4\x06\x08\x00') (mailbox:612)
I just upgraded vdsm:
$ rpm -qi vdsm
Name : vdsm
Version : 4.20.43
3 years, 3 months
Snapshot and disk size allocation
by jorgevisentini@gmail.com
Hello everyone.
I would like to know how disk size and snapshot allocation work, because every time I create a new snapshot, the VM's disk size increases by 1 GB, and when I remove the snapshot, that space is not returned to the storage domain.
I'm using oVirt 4.3.10.
How do I reprovision the VM disk?
Thank you all.
3 years, 5 months
oVirt 4.3 DWH with Grafana
by Vrgotic, Marko
Dear oVirt,
We are currently running oVirt 4.3, and an upgrade/migration to 4.4 won't be possible for a few more months.
I am looking for guidelines or a how-to for setting up Grafana using the Data Warehouse as a data source.
Has anyone already done this and would be willing to share the steps?
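Not a full guide, but the usual building block is a read-only PostgreSQL user on the ovirt_engine_history database, which Grafana then uses as a plain PostgreSQL data source. A rough sketch, run on the machine hosting the DWH database (user name and password are placeholders, and pg_hba.conf must also allow the connection; on oVirt 4.3 / EL7 the database runs from the rh-postgresql10 collection, so psql may need to be invoked via scl enable rh-postgresql10 -- psql):
su - postgres -c "psql -d ovirt_engine_history" <<'SQL'
CREATE USER grafana_ro WITH PASSWORD 'changeme';
GRANT USAGE ON SCHEMA public TO grafana_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO grafana_ro;
SQL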
Kindly awaiting your reply.
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
o: +31 (35) 6774131
m: +31 (65) 5734174
e: m.vrgotic(a)activevideo.com<mailto:m.vrgotic@activevideo.com>
w: www.activevideo.com<http://www.activevideo.com>
3 years, 7 months
Template for Ubuntu 18.04 Server Issues
by jeremy_tourville@hotmail.com
I have built a system as a template on oVirt. Specifically, Ubuntu 18.04 server.
I am noticing an issue when creating new VMs from that template. I used the checkbox for "seal template" when creating the template.
When I create a new Ubuntu VM, I am getting duplicate IP addresses for all the machines created from the template.
It seems like the checkbox doesn't fully function as intended. I would need to do further manual steps to clear up this issue.
Has anyone else noticed this behavior? Is this expected or have I missed something?
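One thing worth checking (an assumption about the cause, not a confirmed oVirt bug): on Ubuntu 18.04, netplan/systemd-networkd derives the DHCP client identifier from /etc/machine-id, so if sealing left the template's machine-id in place, every clone requests a lease with the same identity. A commonly used manual step inside the template before sealing/shutting it down:
sudo truncate -s 0 /etc/machine-id
sudo rm -f /var/lib/dbus/machine-id
sudo ln -s /etc/machine-id /var/lib/dbus/machine-id
# a fresh machine-id is then generated on the first boot of each new VM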
Thanks for your input!
3 years, 7 months
Libgfapi considerations
by Jayme
Are there currently any known issues with using libgfapi in the latest
stable version of oVirt in HCI deployments? I have recently enabled it and
have noticed a significant (over 4x) increase in IO performance on my VMs.
I'm concerned, however, since it does not seem to be an oVirt default
setting. Is libgfapi considered safe and stable to use in oVirt 4.3 HCI?
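For context, a sketch of how the setting is usually inspected and toggled on the engine (this is the non-default switch referred to above; running VMs only pick it up after a cold restart):
engine-config -g LibgfApiSupported
engine-config -s LibgfApiSupported=true --cver=4.3
systemctl restart ovirt-engine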
3 years, 9 months
Re: Parent checkpoint ID does not match the actual leaf checkpoint
by Nir Soffer
On Sun, Jul 19, 2020 at 5:38 PM Łukasz Kołaciński <l.kolacinski(a)storware.eu>
wrote:
> Hello,
> Thanks to previous answers, I was able to make backups. Unfortunately, we
> had some infrastructure issues, and after the host rebooted new problems
> appeared. I am not able to do any backup using the commands that worked
> yesterday. I looked through the logs and there is something like this:
>
> 2020-07-17 15:06:30,644+02 ERROR
> [org.ovirt.engine.core.bll.StartVmBackupCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-54)
> [944a1447-4ea5-4a1c-b971-0bc612b6e45e] Failed to execute VM backup
> operation 'StartVmBackup': {}:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to StartVmBackupVDS, error =
> Checkpoint Error: {'parent_checkpoint_id': None, 'leaf_checkpoint_id':
> 'cd078706-84c0-4370-a6ec-654ccd6a21aa', 'vm_id':
> '116aa6eb-31a1-43db-9b1e-ad6e32fb9260', 'reason': '*Parent checkpoint ID
> does not match the actual leaf checkpoint*'}, code = 1610 (Failed with
> error unexpected and code 16)
>
>
It looks like the engine sent:
parent_checkpoint_id: None
This issue was fixed in the engine a few weeks ago.
Which engine and vdsm versions are you testing?
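(For completeness, the versions can be read straight off the packages; a sketch:)
rpm -q ovirt-engine                      # on the engine machine
rpm -q vdsm qemu-kvm libvirt-daemon      # on the host running the backup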
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:114)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.runVdsCommand(VDSBrokerFrontendImpl.java:33)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runVdsCommand(CommandBase.java:2114)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.performVmBackupOperation(StartVmBackupCommand.java:368)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.runVmBackup(StartVmBackupCommand.java:225)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.StartVmBackupCommand.performNextOperation(StartVmBackupCommand.java:199)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:80)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175)
> at
> deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109)
> at
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
> at
> org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383)
> at
> org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> at
> org.glassfish.javax.enterprise.concurrent//org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250)
>
>
> And the last error is:
>
> 2020-07-17 15:13:45,835+02 ERROR
> [org.ovirt.engine.core.bll.StartVmBackupCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-14)
> [f553c1f2-1c99-4118-9365-ba6b862da936] Failed to execute VM backup
> operation 'GetVmBackupInfo': {}:
> org.ovirt.engine.core.common.errors.EngineException: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed to GetVmBackupInfoVDS, error
> = No such backup Error: {'vm_id': '116aa6eb-31a1-43db-9b1e-ad6e32fb9260',
> 'backup_id': 'bf1c26f7-c3e5-437c-bb5a-255b8c1b3b73', 'reason': '*VM
> backup not exists: Domain backup job id not found: no domain backup job
> present'*}, code = 1601 (Failed with error unexpected and code 16)
>
>
This is likely a result of the first error. If starting the backup failed,
the backup entity is deleted.
> (these errors are from full backup)
>
> Like I said this is very strange because everything was working correctly.
>
>
> Regards
>
> Łukasz Kołaciński
>
> Junior Java Developer
>
> e-mail: l.kolacinski(a)storware.eu
3 years, 11 months
poweroff and reboot with ovirt_vm ansible module
by Nathanaël Blanchet
Hello, is there a way to power off or reboot (without using the stopped and
running states) a VM with the ovirt_vm Ansible module?
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
3 years, 11 months
supervdsm failing during network_caps
by Alan G
Hi,
I have issues with one host where supervdsm is failing in network_caps.
I see the following trace in the log.
MainProcess|jsonrpc/1::ERROR::2020-01-06 03:01:05,558::supervdsm_server::100::SuperVdsm.ServerCallback::(wrapper) Error in network_caps
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/supervdsm_server.py", line 98, in wrapper
res = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/network/api.py", line 56, in network_caps
return netswitch.configurator.netcaps(compatibility=30600)
File "/usr/lib/python2.7/site-packages/vdsm/network/netswitch/configurator.py", line 317, in netcaps
net_caps = netinfo(compatibility=compatibility)
File "/usr/lib/python2.7/site-packages/vdsm/network/netswitch/configurator.py", line 325, in netinfo
_netinfo = netinfo_get(vdsmnets, compatibility)
File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/cache.py", line 150, in get
return _stringify_mtus(_get(vdsmnets))
File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/cache.py", line 59, in _get
ipaddrs = getIpAddrs()
File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/addresses.py", line 72, in getIpAddrs
for addr in nl_addr.iter_addrs():
File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/addr.py", line 33, in iter_addrs
with _nl_addr_cache(sock) as addr_cache:
File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/__init__.py", line 92, in _cache_manager
cache = cache_allocator(sock)
File "/usr/lib/python2.7/site-packages/vdsm/network/netlink/libnl.py", line 469, in rtnl_addr_alloc_cache
raise IOError(-err, nl_geterror(err))
IOError: [Errno 16] Message sequence number mismatch
A restart of supervdsm will resolve the issue for a period, maybe 24 hours, then it will occur again. So I'm thinking it's resource exhaustion or a leak of some kind?
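A couple of low-effort checks that may support or rule out the leak theory (a sketch; supervdsmd is the service name VDSM uses for this component):
systemctl status supervdsmd
pid=$(systemctl show -p MainPID supervdsmd | cut -d= -f2)
ls /proc/$pid/fd | wc -l          # watch whether the fd count keeps growing between checks
grep 'open files' /proc/$pid/limits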
Running 4.2.8.2 with VDSM at 4.20.46.
I've had a look through the bugzilla and can't find an exact match, closest was this one https://bugzilla.redhat.com/show_bug.cgi?id=1666123 which seems to be a RHV only fix.
Thanks,
Alan
4 years, 1 month
The he_fqdn proposed for the engine VM resolves on this host - error?
by lejeczek
hi chaps,
a newcomer here. I use Cockpit to deploy the hosted engine, and I
get this error/warning message:
"The he_fqdn proposed for the engine VM resolves on this host"
I should mention that if I remove the IP to which the FQDN resolves from
that interface (plain eth, no VLANs), then I get this:
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false,
"msg": "The selected network interface is not valid"}
All these errors seem a bit too cryptic to me.
Could you shed a bit of light on what oVirt is saying exactly and
why it's not happy that way?
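For what it's worth, the warning usually means that the FQDN you chose for the future engine VM currently resolves to an address that is already present on this host, while the engine VM needs its own address. A quick way to compare both sides (the FQDN is a placeholder):
dig +short ovirt-engine.example.org
hostname -I        # addresses currently assigned to this host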
many thanks, L.
4 years, 2 months
Upgrade from 4.4.3 to 4.4.4 (oVirt Node) - vdsmd.service/start failed with result 'dependency'
by Marco Fais
Hi all,
I have just upgraded one of my oVirt nodes from 4.4.3 to 4.4.4.
After the reboot, the 4.4.4 image is correctly loaded but vdsmd is not
starting due to this error:
vdsmd.service: Job vdsmd.service/start failed with result 'dependency'.
It looks like it has a dependency on mom-vdsm, and this too has a
dependency issue:
mom-vdsm.service: Job mom-vdsm.service/start failed with result
'dependency'.
After some investigation, it looks like mom-vdsm has a dependency
on ovsdb-server, and this is the unit creating the problem:
ovs-delete-transient-ports.service: Starting requested but asserts failed.
Assertion failed for Open vSwitch Delete Transient Ports
Failed to start Open vSwitch Database Unit.
Details below:
-- Unit ovsdb-server.service has begun starting up.
Dec 24 12:21:57 LAB-CNVirt-H04.ngv.eircom.net chown[13658]: /usr/bin/chown:
cannot access '/var/run/openvswitch': No such file or directory
Dec 24 12:21:57 LAB-CNVirt-H04.ngv.eircom.net ovs-ctl[13667]:
/etc/openvswitch/conf.db does not exist ... (warning).
Dec 24 12:21:57 LAB-CNVirt-H04.ngv.eircom.net ovsdb-tool[13714]:
ovs|00001|lockfile|WARN|/etc/openvswitch/.conf.db.~lock~: failed to open
lock file: Permission denied
Dec 24 12:21:57 LAB-CNVirt-H04.ngv.eircom.net ovs-ctl[13667]: Creating
empty database /etc/openvswitch/conf.db ovsdb-tool: I/O error:
/etc/openvswitch/conf.db: failed to lock lockfile (Resource temporarily
unavailable)
Dec 24 12:21:57 LAB-CNVirt-H04.ngv.eircom.net ovsdb-tool[13714]:
ovs|00002|lockfile|WARN|/etc/openvswitch/.conf.db.~lock~: failed to lock
file: Resource temporarily unavailable
Dec 24 12:21:57 LAB-CNVirt-H04.ngv.eircom.net ovs-ctl[13667]: [FAILED]
Dec 24 12:21:57 LAB-CNVirt-H04.ngv.eircom.net systemd[1]:
ovsdb-server.service: Control process exited, code=exited status=1
Dec 24 12:21:57 LAB-CNVirt-H04.ngv.eircom.net systemd[1]:
ovsdb-server.service: Failed with result 'exit-code'.
-- Subject: Unit failed
Any suggestions?
Thanks,
Marco
4 years, 2 months
Gluster volume slower then raid1 zpool speed
by Harry O
Hi,
Can anyone help me with the performance of my 3-node gluster on ZFS (it is set up with one arbiter)?
The write performance of the single VM I have on it (with the engine) is 50% worse than a single bare-metal disk.
I have enabled "Optimize for virt store".
I run a 1Gbps, 1500 MTU network; could this be the write performance killer?
Is this to be expected from a 2x HDD ZFS RAID 1 on each node, with a 3-node arbiter setup?
Maybe I should move to RAID 5 or 6?
Maybe I should add SSD cache to the RAID 1 ZFS zpools?
What are your thoughts? What should I do to optimize this setup?
I would like to run zfs with gluster and I can deal with a little performance loss, but not that much.
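One way to rule the 1Gbps link in or out before touching the ZFS layout is to measure raw throughput between two of the gluster nodes; this is just a sketch and assumes iperf3 is installed on both nodes:
iperf3 -s                           # on the first node
iperf3 -c <first-node-ip> -t 30     # on the second node
# replica writes have to land on both data bricks over the same 1Gbps NIC,
# so a client write rate well under the ~110 MB/s line rate is expected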
4 years, 2 months
New failure Gluster deploy: Set granual-entry-heal on --> Bricks down
by Charles Lam
Dear friends,
Thanks to Donald and Strahil, my earlier Gluster deploy issue was resolved by disabling multipath on the nvme drives. The Gluster deployment is now failing on the three-node hyperconverged oVirt v4.4.3 deployment at:
TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **********
task path: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
with:
"stdout": "One or more bricks could be down. Please execute the command
again after bringing all bricks online and finishing any pending heals\nVolume heal
failed."
Specifically:
TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **********
task path: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'engine',
'brick': '/gluster_bricks/engine/engine', 'arbiter': 0}) =>
{"ansible_loop_var": "item", "changed": true,
"cmd": ["gluster", "volume", "heal",
"engine", "granular-entry-heal", "enable"],
"delta": "0:00:10.112451", "end": "2020-12-18
19:50:22.818741", "item": {"arbiter": 0, "brick":
"/gluster_bricks/engine/engine", "volname": "engine"},
"msg": "non-zero return code", "rc": 107, "start":
"2020-12-18 19:50:12.706290", "stderr": "",
"stderr_lines": [], "stdout": "One or more bricks could be down.
Please execute the command again after bringing all bricks online and finishing any
pending heals\nVolume heal failed.", "stdout_lines": ["One or more
bricks could be down. Please execute the command again after bringing all bricks online
and finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'data', 'brick':
'/gluster_bricks/data/data', 'arbiter': 0}) =>
{"ansible_loop_var": "item", "changed": true,
"cmd": ["gluster", "volume", "heal",
"data", "granular-entry-heal", "enable"], "delta":
"0:00:10.110165", "end": "2020-12-18 19:50:38.260277",
"item": {"arbiter": 0, "brick":
"/gluster_bricks/data/data", "volname": "data"},
"msg": "non-zero return code", "rc": 107, "start":
"2020-12-18 19:50:28.150112", "stderr": "",
"stderr_lines": [], "stdout": "One or more bricks could be down.
Please execute the command again after bringing all bricks online and finishing any
pending heals\nVolume heal failed.", "stdout_lines": ["One or more
bricks could be down. Please execute the command again after bringing all bricks online
and finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'vmstore',
'brick': '/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) =>
{"ansible_loop_var": "item", "changed": true,
"cmd": ["gluster", "volume", "heal",
"vmstore", "granular-entry-heal", "enable"],
"delta": "0:00:10.113203", "end": "2020-12-18
19:50:53.767864", "item": {"arbiter": 0, "brick":
"/gluster_bricks/vmstore/vmstore", "volname": "vmstore"},
"msg": "non-zero return code", "rc": 107, "start":
"2020-12-18 19:50:43.654661", "stderr": "",
"stderr_lines": [], "stdout": "One or more bricks could be down.
Please execute the command again after bringing all bricks online and finishing any
pending heals\nVolume heal failed.", "stdout_lines": ["One or more
bricks could be down. Please execute the command again after bringing all bricks online
and finishing any pending heals", "Volume heal failed."]}
Any suggestions regarding troubleshooting, insight or recommendations for reading are greatly appreciated. I apologize for all the email and am only creating this as a separate thread as it is a new, presumably unrelated issue. I welcome any recommendations if I can improve my forum etiquette.
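In case it helps with the troubleshooting, a quick way to confirm whether the bricks really are down before re-running the deployment is to check from one of the nodes (volume names taken from the log above; this is only a sketch):
gluster peer status
gluster volume status engine
gluster volume heal engine info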
Respectfully,
Charles
4 years, 2 months
OVN and change of mgmt network
by Gianluca Cecchi
Hello,
I previously had OVN running on the engine (as OVN provider with northd and
the northbound and southbound DBs) and on the hosts (with ovn-controller).
After changing the mgmt IP of the hosts (the engine instead retained the same IP),
I executed this command on them again:
vdsm-tool ovn-config <ip_of_engine> <new_local_ip_of_host>
Now I think I have to clean up some things, eg:
1) On engine
where I get these lines below
systemctl status ovn-northd.service -l
. . .
Sep 29 14:41:42 ovmgr1 ovsdb-server[940]: ovs|00005|reconnect|ERR|tcp:
10.4.167.40:37272: no response to inactivity probe after 5 seconds,
disconnecting
Oct 03 11:52:00 ovmgr1 ovsdb-server[940]: ovs|00006|reconnect|ERR|tcp:
10.4.167.41:52078: no response to inactivity probe after 5 seconds,
disconnecting
The two IPs are the old ones of two hosts
It seems that a restart of the services has fixed it...
Can anyone confirm if I have to do anything else?
2) On hosts (there are 3 hosts with OVN on ip 10.4.192.32/33/34)
where I currently have this output
[root@ov301 ~]# ovs-vsctl show
3a38c5bb-0abf-493d-a2e6-345af8aedfe3
Bridge br-int
fail_mode: secure
Port "ovn-1dce5b-0"
Interface "ovn-1dce5b-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.4.192.32"}
Port "ovn-ddecf0-0"
Interface "ovn-ddecf0-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.4.192.33"}
Port "ovn-fd413b-0"
Interface "ovn-fd413b-0"
type: geneve
options: {csum="true", key=flow, remote_ip="10.4.168.74"}
Port br-int
Interface br-int
type: internal
ovs_version: "2.7.2"
[root@ov301 ~]#
The IPs of kind 10.4.192.x are ok.
But there is a leftover entry for an old host I initially used for tests,
corresponding to 10.4.168.74, that doesn't exist anymore.
How can I clean records for 1) and 2)?
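A possible cleanup, sketched under the assumption that the stale entries are exactly the ones shown above: on the engine, delete the old chassis records from the southbound DB, and on the host, drop the leftover tunnel port.
ovn-sbctl show                          # on the engine, note the UUID of the stale chassis
ovn-sbctl chassis-del <stale-chassis-uuid>
ovs-vsctl del-port br-int ovn-fd413b-0  # on ov301, removes the tunnel pointing at 10.4.168.74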
Thanks,
Gianluca
4 years, 2 months
CentOS Stream support
by Michal Skrivanek
Hi all,
we would like to ask about interest in community about oVirt moving to CentOS Stream.
There were some requests before but it’s hard to see how many people would really like to see that.
With CentOS releases lagging behind RHEL for months it’s interesting to consider moving to CentOS Stream as it is much more up to date and allows us to fix bugs faster, with less workarounds and overhead for maintaining old code. E.g. our current integration tests do not really pass on CentOS 8.1 and we can’t really do much about that other than wait for more up to date packages. It would also bring us closer to make oVirt run smoothly on RHEL as that is also much closer to Stream than it is to outdated CentOS.
So..would you like us to support CentOS Stream?
We don’t really have the capacity to run 3 different platforms; would you still want oVirt to support CentOS Stream if it means “less support” for regular CentOS?
There are some concerns about Stream being a bit less stable, do you share those concerns?
Thank you for your comments,
michal
4 years, 2 months
[ANN] oVirt 4.4.4 is now generally available
by Sandro Bonazzola
oVirt 4.4.4 is now generally available
The oVirt project is excited to announce the general availability of oVirt
4.4.4, as of December 21st, 2020.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics, as compared to oVirt 4.3.
Important notes before you install / upgrade
Please note that oVirt 4.4 only supports clusters and data centers with
compatibility version 4.2 and above. If clusters or data centers are
running with an older compatibility version, you need to upgrade them to at
least 4.2 (4.3 is recommended).
Please note that in RHEL 8 / CentOS 8 several devices that worked on EL7
are no longer supported.
For example, the megaraid_sas driver is removed. If you use Enterprise
Linux 8 hosts you can try to provide the necessary drivers for the
deprecated hardware using the DUD method (See the users’ mailing list
thread on this at
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NDSVUZSESOXE...
)
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
What’s new in oVirt 4.4.4 Release?
This update is the fourth in a series of stabilization updates to the 4.4
series.
This release is available now on x86_64 architecture for:
- Red Hat Enterprise Linux 8.3
- CentOS Linux (or similar) 8.3
- CentOS Stream (tech preview)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures for:
- Red Hat Enterprise Linux 8.3
- CentOS Linux (or similar) 8.3
- oVirt Node (based on CentOS Linux 8.3)
- CentOS Stream (tech preview)
oVirt Node and Appliance have been updated, including:
- oVirt 4.4.4: https://www.ovirt.org/release/4.4.4/
- Ansible 2.9.16: https://github.com/ansible/ansible/blob/stable-2.9/changelogs/CHANGELOG-v...
- CentOS Linux 8 (2011): https://lists.centos.org/pipermail/centos-announce/2020-December/048207.html
- Advanced Virtualization 8.3
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional resources:
- Read more about the oVirt 4.4.4 release highlights: https://www.ovirt.org/release/4.4.4/
- Get more oVirt project updates on Twitter: https://twitter.com/ovirt
- Check out the latest project news on the oVirt blog: https://blogs.ovirt.org/
[1] https://www.ovirt.org/release/4.4.4/
[2] https://resources.ovirt.org/pub/ovirt-4.4/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
4 years, 3 months
encrypted GENEVE traffic
by Pavel Nakonechnyi
Dear oVirt Community,
From my understanding oVirt does not support Open vSwitch IPSEC tunneling for GENEVE traffic (which is described on pages http://docs.openvswitch.org/en/latest/howto/ipsec/ and http://docs.openvswitch.org/en/latest/tutorials/ipsec/).
Are there plans to introduce such support? (or explicitly not to..)
Is it possible to somehow manually configure such tunneling for existing virtual networks? (even in a limited way)
Alternatively, is it possible to deploy oVirt on top of tunneled (i.e. via VXLAN, IPsec) interfaces? This would allow encrypting all management traffic.
Such a requirement arises when deploying oVirt on third-party premises with an untrusted network.
Thanks in advance for any clarifications. :)
--
WBR, Pavel
+32478910884
4 years, 3 months
Re: Constantly XFS in memory corruption inside VMs
by Strahil Nikolov
Damn...
You are using EFI boot. Does this happen only to EFI machines ?
Did you notice if only EL 8 is affected ?
Best Regards,
Strahil Nikolov
On Sunday, 29 November 2020 at 19:36:09 GMT+2, Vinícius Ferrão <ferrao(a)versatushpc.com.br> wrote:
Yes!
I have a live VM right now that will de dead on a reboot:
[root@kontainerscomk ~]# cat /etc/*release
NAME="Red Hat Enterprise Linux"
VERSION="8.3 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.3"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.3 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8.3:GA"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.3"
Red Hat Enterprise Linux release 8.3 (Ootpa)
Red Hat Enterprise Linux release 8.3 (Ootpa)
[root@kontainerscomk ~]# sysctl -a | grep dirty
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 30
vm.dirty_writeback_centisecs = 500
vm.dirtytime_expire_seconds = 43200
[root@kontainerscomk ~]# xfs_db -r /dev/dm-0
xfs_db: /dev/dm-0 is not a valid XFS filesystem (unexpected SB magic number 0xa82a0000)
Use -F to force a read attempt.
[root@kontainerscomk ~]# xfs_db -r /dev/dm-0 -F
xfs_db: /dev/dm-0 is not a valid XFS filesystem (unexpected SB magic number 0xa82a0000)
xfs_db: size check failed
xfs_db: V1 inodes unsupported. Please try an older xfsprogs.
[root@kontainerscomk ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Nov 19 22:40:39 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=ad84d1ea-c9cc-4b22-8338-d1a6b2c7d27e /boot xfs defaults 0 0
UUID=4642-2FF6 /boot/efi vfat umask=0077,shortname=winnt 0 2
/dev/mapper/rhel-swap none swap defaults 0 0
Thanks,
-----Original Message-----
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
Sent: Sunday, November 29, 2020 2:33 PM
To: Vinícius Ferrão <ferrao(a)versatushpc.com.br>
Cc: users <users(a)ovirt.org>
Subject: Re: [ovirt-users] Re: Constantly XFS in memory corruption inside VMs
Can you check the output on the VM that was affected:
# cat /etc/*release
# sysctl -a | grep dirty
Best Regards,
Strahil Nikolov
On Sunday, 29 November 2020 at 19:07:48 GMT+2, Vinícius Ferrão via Users <users(a)ovirt.org> wrote:
Hi Strahil.
I’m not using barrier options on mount. These are the default settings from the CentOS install.
I have some additional findings: there’s a large number of discarded packets on the switch on the hypervisor interfaces.
Discards are OK as far as I know; I hope TCP handles this and does the proper retransmissions, but I wonder whether this may be related or not. Our storage is over NFS. My general expertise is with iSCSI and I’ve never seen this kind of issue with iSCSI, not that I’m aware of.
In other clusters, I’ve seen a high number of discards with iSCSI on XenServer 7.2 but there’s no corruption on the VMs there...
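A quick, non-invasive check to see whether the switch discards actually translate into NFS retransmissions on the hosts (sketch only):
nfsstat -rc          # look at the retrans counter on each hypervisor
mount | grep rhev    # confirm the active NFS mount options (note the 'soft' option shown below)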
Thanks,
Sent from my iPhone
> On 29 Nov 2020, at 04:00, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>
> Are you using "nobarrier" mount options in the VM ?
>
> If yes, can you try to remove the "nobarrrier" option.
>
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Saturday, 28 November 2020 at 19:25:48 GMT+2, Vinícius Ferrão <ferrao(a)versatushpc.com.br> wrote:
>
>
>
>
>
> Hi Strahil,
>
> I moved a running VM to another host, rebooted, and no corruption was found. If there's any corruption it may be silent corruption... I've had cases where the VM was new, just installed, ran dnf -y update to get the updated packages, rebooted, and boom, XFS corruption. So perhaps the motion process isn't the one to blame.
>
> But, in fact, I remember when moving a VM that it went down during the process and when I rebooted it was corrupted. But this may not seem related. It perhaps was already in an inconsistent state.
>
> Anyway, here's the mount options:
>
> Host1:
> 192.168.10.14:/mnt/pool0/ovirt/vm on
> /rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm type nfs4
> (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,noshar
> ecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=192.168.10.1,l
> ocal_lock=none,addr=192.168.10.14)
>
> Host2:
> 192.168.10.14:/mnt/pool0/ovirt/vm on
> /rhev/data-center/mnt/192.168.10.14:_mnt_pool0_ovirt_vm type nfs4
> (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,soft,noshar
> ecache,proto=tcp,timeo=100,retrans=3,sec=sys,clientaddr=192.168.10.1,l
> ocal_lock=none,addr=192.168.10.14)
>
> The options are the default ones. I haven't changed anything when configuring this cluster.
>
> Thanks.
>
>
>
> -----Original Message-----
> From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
> Sent: Saturday, November 28, 2020 1:54 PM
> To: users <users(a)ovirt.org>; Vinícius Ferrão
> <ferrao(a)versatushpc.com.br>
> Subject: Re: [ovirt-users] Constantly XFS in memory corruption inside
> VMs
>
> Can you try with a test vm, if this happens after a Virtual Machine migration ?
>
> What are your mount options for the storage domain ?
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Saturday, 28 November 2020 at 18:25:15 GMT+2, Vinícius Ferrão via Users <users(a)ovirt.org> wrote:
>
>
>
>
>
>
>
>
> Hello,
>
>
>
> I’m trying to discover why an oVirt 4.4.3 Cluster with two hosts and NFS shared storage on TrueNAS 12.0 is constantly getting XFS corruption inside the VMs.
>
>
>
> For random reasons VMs get corrupted, sometimes halting or just being silently corrupted, and after a reboot the system is unable to boot due to “corruption of in-memory data detected”. Sometimes the corrupted data is “all zeroes”, sometimes there’s data there. In extreme cases the XFS superblock 0 gets corrupted and the system cannot even detect an XFS partition anymore since the magic XFS key is corrupted on the first blocks of the virtual disk.
>
>
>
> This is happening for a month now. We had to rollback some backups, and I don’t trust anymore on the state of the VMs.
>
>
>
> Using xfs_db I can see that some VMs have corrupted superblocks but the VM is up. One in particular had sb0 corrupted, so I knew that when a reboot kicked in the machine would be gone, and that’s exactly what happened.
>
>
>
> Another day I was just installing a new CentOS 8 VM for random reasons, and after running dnf -y update and a reboot the VM was corrupted and needed XFS repair. That was an extreme case.
>
>
>
> So, I’ve looked at the TrueNAS logs, and there’s apparently nothing wrong with the system. No errors logged in dmesg, nothing in /var/log/messages and no errors on the “zpools”, not even after scrub operations. On the switch, a Catalyst 2960X, we’ve been monitoring it and all its interfaces. There are no “up and down” events and zero errors on all interfaces (we have a 4x port LACP on the TrueNAS side and a 2x port LACP on each host), everything seems to be fine. The only metric that I was unable to get is “dropped packets”, but I don’t know if this can be an issue or not.
>
>
>
> Finally, on oVirt, I can’t find anything either. I looked on /var/log/messages and /var/log/sanlock.log but there’s nothing that I found suspicious.
>
>
>
> Is there’s anyone out there experiencing this? Our VM’s are mainly CentOS 7/8 with XFS, there’s 3 Windows VM’s that does not seems to be affected, everything else is affected.
>
>
>
> Thanks all.
>
>
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org Privacy
> Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VLYSE7HC
> FNWTWFZZTL2EJHV36OENHUGB/
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CZ5E55LJMA7...
4 years, 3 months
upload_disk.py - CLI Upload Disk
by Jorge Visentini
Hi All.
I'm using version 4.4.4 (latest stable version -
ovirt-node-ng-installer-4.4.4-2020122111.el8.iso)
I tried using the upload_disk.py script, but I don't think I knew how to
use it.
When I try to use it, these errors occur:
python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py disk01-SO.qcow2 --disk-format qcow2 --sd-name ISOs
usage: upload_disk.py [-h] -c CONFIG [--debug] [--logfile LOGFILE]
                      [--disk-format {raw,qcow2}] [--disk-sparse]
                      [--enable-backup] --sd-name SD_NAME [--use-proxy]
                      [--max-workers MAX_WORKERS] [--buffer-size BUFFER_SIZE]
                      filename
upload_disk.py: error: the following arguments are required: -c/--config
Using the upload_disk.py help:
python3 upload_disk.py --help
  -c CONFIG, --config CONFIG
                        Use engine connection details from [CONFIG] section in
                        ~/.config/ovirt.conf.
Is this CONFIG the API access configuration for authentication? Looking through the script I did not find this information.
Does this new version work differently, or am I doing something wrong?
In the sdk_4.3 version of upload_disk.py I had to change the script to add
the access information, but it worked.
[root@engineteste01 ~]# python3 upload_disk.py disk01-SO.qcow2
Checking image...
Disk format: qcow2
Disk content type: data
Connecting...
Creating disk...
Creating transfer session...
Uploading image...
Uploaded 20.42%
Uploaded 45.07%
Uploaded 68.89%
Uploaded 94.45%
Uploaded 2.99g in 42.17 seconds (72.61m/s)
Finalizing transfer session...
Upload completed successfully
[root@engineteste01 ~]#
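For what it's worth, -c/--config points at a named section in ~/.config/ovirt.conf that the SDK examples read their connection details from. A minimal sketch (the section name, credentials and CA path below are placeholders, and the key names are my reading of the examples' helper, so please double-check against the installed script):
cat > ~/.config/ovirt.conf << 'EOF'
[engine1]
engine_url = https://engine.example.com
username = admin@internal
password = mypassword
cafile = /etc/pki/ovirt-engine/ca.pem
EOF
python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py -c engine1 --sd-name ISOs --disk-format qcow2 disk01-SO.qcow2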
Thank you all!!
--
Att,
Jorge Visentini
+55 55 98432-9868
4 years, 3 months
Windows 7 vm lost network connection under heavy network load
by Joey Ma
Hi folks,
Happy holidays.
I'm having an urgent problem :smile: .
I've installed oVirt 4.4.2 on CentOS 8.2 and then created several Windows 7
VMs for stress testing. I found that heavy network load would leave the
e1000 NIC unable to receive packets; it seemed totally blocked.
In the meantime, packet sending was fine.
Only re-enabling the NIC restores the network. Has anyone else had
this problem? Looking forward to your insights. Much appreciated.
Best regards,
Joey
4 years, 3 months
Cannot upgrade cluster to v4.5 (All hosts are CentOS 8.3.2011)
by Gilboa Davara
Hello all,
I'm more-or-less finished building a new ovirt over glusterfs cluster with
3 fairly beefy servers.
Nodes were fully upgraded to CentOS Linux release 8.3.2011 before they
joined the cluster.
Looking at the cluster view in the WebUI, I get an exclamation mark with
the following message: "Upgrade cluster compatibility level".
When I try to upgrade the cluster, 2 of the 3 hosts go into maintenance and
reboot, but once the procedure is complete, the cluster version remains the
same.
Looking at the host vdsm logs, I see that once the engine refreshes their
capabilities, all hosts return 4.2-4.4 and not 4.5.
E.g.
'supportedENGINEs': ['4.2', '4.3', '4.4'], 'clusterLevels': ['4.2', '4.3',
'4.4']
I assume I should be seeing 4.5 after the upgrade, no?
Am I missing something?
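One thing worth checking directly on a host (sketch only) is what the host itself reports, to separate a reporting problem from a packaging problem:
vdsm-client Host getCapabilities | grep -A4 -E 'clusterLevels|supportedENGINEs'
rpm -q vdsm    # cluster level 4.5 needs a recent enough vdsm on every host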
Thanks,
- Gilboa
4 years, 3 months
Best Practice? Affinity Rules Enforcement Manager or High Availability?
by souvaliotimaria@mail.com
Hello everyone,
Not sure if I should ask this here as it seems to be a pretty obvious question but here it is.
What is the best solution for making your VMs able to automatically boot up on another working host when something goes wrong (gluster problem, non responsive host etc)? Would you enable the Affinity Manager and enforce some policies or would you set the VMs you want as Highly Available?
Thank you very much for your time!
Best regards,
Maria Souvalioti
4 years, 3 months
Re: Breaking up a oVirt cluster on storage domain boundary.
by Strahil Nikolov
> Can I migrate storage domains, and thus all the VMs within that
> storage domain?
>
>
>
> Or will I need to build new cluster, with new storage domains, and
> migrate the VMs?
>
>
Actually you can create a new cluster and ensure that the Storage
domains are accessible by that new cluster.
Then to migrate, you just need to power off the VM, Edit -> change
cluster, network, etc and power it up.
It will start on the hosts in the new cluster and then you just need to
verify that the application is working properly.
Best Regards,
Strahil Nikolov
4 years, 3 months
Breaking up a oVirt cluster on storage domain boundary.
by Matthew.Stier@fujitsu.com
Is it possible to break up an oVirt cluster into multiple clusters?
I have an Oracle Linux Virtualization Manager 4.3.6 cluster (think oVirt 4.3.6) that is hosting four different classes of VM's.
I have acquired some additional hosts, and instead of adding them to my "default" cluster, I want to create three new clusters, and migrate each class of VM to its own cluster.
Each class has its own network, and storage domains.
Can I migrate storage domains, and thus all the VMs within that storage domain?
Or will I need to build new cluster, with new storage domains, and migrate the VMs?
Most of my VMs are built from templates. I assume that those cloned from templates will not be an issue. But some of my classes are thin provisioned, and I suspect I will have an issue with migrating those at the VM level, which is why I want to migrate them at the storage domain level.
4 years, 3 months
Shrink iSCSI Domain
by Vinícius Ferrão
Hello,
Is there any way to reduce the size of an iSCSI Storage Domain? I can’t seem to figure this out myself. It’s probably unsupported, and the path would be to create a new iSCSI Storage Domain with the reduced size, move the virtual disks there, and then delete the old one.
But I would like to confirm whether this is the only way to do it…
In the past I had a requirement, so I created the VM domains with 10TB; now it’s just too much, and I need to use the space on the storage for other activities.
Thanks all and happy new year.
4 years, 3 months
"POSIX storage based vm live migration failed"
by Tarun Kushwaha
After upgrading the cluster version to 4.5, VM live migration fails with a storage I/O error and the VM status changes from Running to Paused. With the lower cluster version the same migration works.
4 years, 3 months
Re: New to oVirt - HE and Metric Store Question
by Strahil Nikolov
I can't recall the exact issue that was reported in the mailing list, but I remember that the user had to power off the engine and the VMs... the Devs can clearly indicate the risks of running the HostedEngine with other VMs on the same storage domain.
Based on Red Hat's RHV documentation, the following warning clearly indicates some of the reasons:
Creating additional data storage domains in the same data center as the self-hosted engine storage domain is highly recommended. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you will not be able to add new storage domains or remove the corrupted storage domain; you will have to redeploy the self-hosted engine.
Another one from Red Hat's Self-Hosted Engine Recommendations:
On Monday, 28 December 2020 at 19:40:15 GMT+2, Nur Imam Febrianto <nur_imam(a)outlook.com> wrote:
What kind of situation is that? If so, how can I migrate my hosted engine to another storage domain?
Regards,
Nur Imam Febrianto
From: Strahil Nikolov
Sent: 29 December 2020 0:31
To: oVirt Users; Nur Imam Febrianto
Subject: Re: [ovirt-users] New to oVirt - HE and Metric Store Question
> 1. Right now we are using one SAN with 4 LUN (each mapped into 1 >specific volume) and configure the storage domain for each LUn (1 LUN = 1 >Storage Domain). Is this configuration are good ? One more, about Hosted >Engine, when we setup the cluster, it provision one storage domain, but >the storage domain is not exlusively used by Hosted Engine, we use it too >for other VM. Are this OK or it have a side impact ?
Avoid using the HostedEngine's storage domain for other VMs. You might get into a situation that you want to avoid.
Best Regards,
Strahil Nikolov
4 years, 3 months
Cannot run yum, no internet connection on a oVirt node
by jenia.ivlev@gmail.com
Hello.
I need help setting up the networking on the "self hosted engine" VM.
I can't run yum in the self-hosted engine [1]; there seems not to be any internet connection. I get the error "couldn't resolve host: mirrors.fedoraproject.org". Same thing when I do curl google.com, for example. Also, I can't ssh to this VM from my physical machine either; I get "connection refused". I can ping the "self-hosted engine" VM though.
Here is the output of ip addr on the "self hosted engine" VM:
2. enp1s0: .... state UP
link/ether 52.54....
3. virbr0: ... state DOWN
inet 192.168.122.1
Here is the ip addr output on the host machine:
3. virbr0: ... state UP
inet 192.168.122.1
Can someone please help me set up networking on the "self-hosted engine" VM?
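A few checks from inside the engine VM might narrow it down (sketch only; the connection name in the nmcli lines is an assumption, take the real one from 'nmcli connection show'):
cat /etc/resolv.conf
ip route
nmcli connection show
# if the resolver is simply missing, something along these lines:
nmcli connection modify enp1s0 ipv4.dns "192.168.122.1"
nmcli connection up enp1s0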
Thanks kindly.
Happy holidays.
[1] https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_eng...
4 years, 3 months
Re: New to oVirt - HE and Metric Store Question
by Strahil Nikolov
> 1. Right now we are using one SAN with 4 LUN (each mapped into 1 >specific volume) and configure the storage domain for each LUn (1 LUN = 1 >Storage Domain). Is this configuration are good ? One more, about Hosted >Engine, when we setup the cluster, it provision one storage domain, but >the storage domain is not exlusively used by Hosted Engine, we use it too >for other VM. Are this OK or it have a side impact ?
Avoid using the HostedEngine's storage domain for other VMs. You might get into a situation that you want to avoid.
Best Regards,
Strahil Nikolov
4 years, 3 months
Moving VMs from 4.3.9 to 4.4.4
by Diggy Mc
What is the best way to move VMs from the current 4.3.9 to a new 4.4.4 environment?
The current environment is running oVirt 4.3.9 (started as 4.3.7). There is a single data domain using NFS on a single NAS.
The future oVirt 4.4.4 environment will also have a single data domain using NFS on the same NAS as the current environment.
What is the best way to move approximately 40 VMs from the current 4.3.9 to a new 4.4.4 environment?
Are there any caveats or considerations I need to take into account when moving the VMs?
Your help, as always, is greatly appreciated.
4 years, 3 months
New to oVirt - HE and Metric Store Question
by Nur Imam Febrianto
Hi,
We’re migrating our 10 physical hosts with 50 small VMs from VMware to oVirt and are very happy with its performance. There are two questions that we want to ask:
1. Right now we are using one SAN with 4 LUNs (each mapped to 1 specific volume) and configure a storage domain for each LUN (1 LUN = 1 Storage Domain). Is this configuration good? One more thing, about the Hosted Engine: when we set up the cluster, it provisioned one storage domain, but that storage domain is not exclusively used by the Hosted Engine; we use it for other VMs too. Is this OK or does it have a side impact?
2. We’re kind of interested in using Metric Store (we need performance metric history over time). Is it possible to deploy the Metric Store using smaller resources? Our hardware is so small that we can’t afford to dedicate that many resources to the Metric Store.
Thanks before.
4 years, 3 months
power management iLO5
by ozmen62@hotmail.com
Hi,
I've read the mailing list about IPMI fencing but could not find a solution.
We have HP Gen 10 servers; they have iLO5 and fencing does not work.
In the Red Hat knowledge base there is a solution:
Issue
No iLO5 fencing method available in RHV-manager
Power Management via IPMILAN does not work on HPE Gen 10 systems
Power Management via iLO4 does not work on HPE Gen 10 systems
Resolution
The IPMI interface has to be enabled in the security section of a iLO-5 on a HPE Gen 10 system in order to use the fence-agents iLO3, iLO4 or IPMILAN.
A user with Login permissions and Operator level access for IPMI should be sufficient for fencing.
Root Cause
The default settings of iLO-5 on a HPE Gen 10 system have disabled IPMI access to the management controller. The fence agents for iLO-3 onward default to be using IPMI in order to communicate with the management controller. Therefore iLO3, iLO4 as well as IPMILAN fail to connect to a iLO-5 management controller.
But there is no IPMI option in Power Management.
How can I enable power management properly with iLO5 or IPMI?
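Once IPMI-over-LAN is enabled in the iLO 5 security settings, a quick way to verify it from one of the hosts before touching the UI is the fence agent itself (sketch; address and credentials are placeholders):
fence_ipmilan --ip=<ilo5-address> --username=<ipmi-user> --password=<ipmi-password> --lanplus -o status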
Thanks
4 years, 3 months
cinder backup fail
by ozmen62@hotmail.com
Hi,
After the 4.4.4 upgrade, I've enabled Grafana.
Now there is an error while backing up:
Backing up:
Notifying engine
- Files
- DWH database 'ovirt_engine_history_20201223073120'
- CINDERLIB database 'ovirt_cinderlib'
Notifying engine
FATAL: Database ovirt_cinderlib backup failed
In the log file, it says:
pg_dump: error: connection to database "ovirt_cinderlib" failed: FATAL: Ident authentication failed for user "ovirt_cinderlib"
Could you help me fix it?
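A guess, based on the "Ident authentication failed" message: the ovirt_cinderlib database may simply be missing the md5 entries in pg_hba.conf that the engine and DWH databases have. A sketch of what to check (the path assumes the engine's bundled PostgreSQL 12 data dir):
grep cinderlib /var/lib/pgsql/data/pg_hba.conf
# if nothing matches, add lines analogous to the existing engine entries, e.g.
#   host  ovirt_cinderlib  ovirt_cinderlib  127.0.0.1/32  md5
#   host  ovirt_cinderlib  ovirt_cinderlib  ::1/128       md5
# and reload: systemctl reload postgresql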
4 years, 3 months
4.4.4 hosted engine deployment failure
by Diggy Mc
While trying to deploy a 4.4.4 hosted engine via the cockpit interface, I received the following error(s):
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for the host to be up]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Host is not up, please check logs, perhaps also on the engine machine"}
...
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
I am trying to build a new oVirt 4.4.4 environment using Oracle Linux 8.3 host servers. I created /var on a separate LV from / (root). I also configured the adapter bond per the docs, the same way I did with my 4.3.7 environment.
Did I miss a configuration requirement for the OL 8.3 installation? Where can I find more details about the error(s)?
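The per-run logs usually carry the underlying error; a sketch of where to look:
ls -ltr /var/log/ovirt-hosted-engine-setup/      # on the host running the deployment
ls -ltr /var/log/ovirt-engine/host-deploy/       # inside the engine VM, once it is reachable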
4 years, 3 months
engine storage fail after upgrade
by ozmen62@hotmail.com
Hi,
After the upgrade from 4.3 to 4.4 some errors pop up on the engine.
It becomes unavailable several times a day for 2-3 minutes and then comes back.
After some research on the system, I found some logs.
On the hosted_storage domain there are 2 events:
1- Failed to update VMs/Templates OVF data for Storage Domain hosted_storage in Data Center XXX
2- Failed to update OVF disks 9cbb34d0-06b0-4ce7-a3fa-7dfed689c442, OVF data isn't updated on those OVF stores (Data Center XXX, Storage Domain hosted_storage).
Does anyone have an idea how I can fix this?
4 years, 4 months
FW: oVirt 4.4 and Active directory
by Latchezar Filtchev
Hello,
I think I resolved this issue. It is the dig response when resolving the domain name!
CentOS-7 - bind-utils-9.11.4-16.P2.el7_8.6.x86_64; Windows AD level 2008R2; in my case dig returns answer with
;; ANSWER SECTION:
mb118.local. 600 IN A 192.168.1.7
The IP address returned is the address of the DC.
CentOS-8 - bind-utils-9.11.20-5.el8.x86_64; same Domain Controller; dig returns an answer without the ;;ANSWER SECTION, i.e. the IP address of the DC cannot be identified.
The solution is to add the directive '+nocookie' after '+tcp' in the file /usr/share/ovirt-engine-extension-aaa-ldap/setup/plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py
The section starts at line 144:
    @staticmethod
    def _resolver(plugin, record, what):
        rc, stdout, stderr = plugin.execute(
            args=(
                (
                    plugin.command.get('dig'),
                    '+noall',
                    '+answer',
                    '+tcp',
                    '+nocookie',
                    what,
                    record
                )
            ),
        )
        return stdout
With this change, execution of ovirt-engine-extension-aaa-ldap-setup completes successfully and joins a fresh install of oVirt 4.4 to Active Directory.
If level of AD is 2016 '+nocookie' change is not needed.
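A quick way to confirm the behaviour on an affected EL8 machine before and after the change (sketch, using the forest name from above):
dig +noall +answer +tcp mb118.local            # no ANSWER SECTION on EL8 against a 2008R2-level DC
dig +noall +answer +tcp +nocookie mb118.local  # should now return the A record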
Happy holidays to all of you!
Stay safe!
Thank you!
Best,
Latcho
From: Latchezar Filtchev
Sent: Tuesday, November 24, 2020 10:31 AM
To: users(a)ovirt.org
Subject: oVirt 4.4 and Active directory
Hello All,
Fresh standalone installation of oVirt 4.3 (CentOS 7). Execution of ovirt-engine-extension-aaa-ldap-setup completes normally and the DC is connected to AD (Domain functional level: Windows Server 2008).
On the same hardware fresh standalone installation of oVirt 4.4.
Installation of engine completed with warning:
2020-11-23 14:50:46,159+0200 WARNING otopi.plugins.ovirt_engine_common.base.network.hostname hostname._validateFQDNresolvability:308 Failed to resolve 44-8.mb118.local using DNS, it can be resolved only locally
Despite the warning, the engine portal is resolvable after installation.
Execution of ovirt-engine-extension-aaa-ldap-setup ends with:
[ INFO ] Stage: Environment customization
Welcome to LDAP extension configuration program
Available LDAP implementations:
1 - 389ds
2 - 389ds RFC-2307 Schema
3 - Active Directory
4 - IBM Security Directory Server
5 - IBM Security Directory Server RFC-2307 Schema
6 - IPA
7 - Novell eDirectory RFC-2307 Schema
8 - OpenLDAP RFC-2307 Schema
9 - OpenLDAP Standard Schema
10 - Oracle Unified Directory RFC-2307 Schema
11 - RFC-2307 Schema (Generic)
12 - RHDS
13 - RHDS RFC-2307 Schema
14 - iPlanet
Please select: 3
Please enter Active Directory Forest name: mb118.local
[ INFO ] Resolving Global Catalog SRV record for mb118.local
[WARNING] Cannot resolve Global Catalog SRV record for mb118.local. Please check you have entered correct Active Directory forest name and check that forest is resolvable by your system DNS servers
[ ERROR ] Failed to execute stage 'Environment customization': Active Directory forest is not resolvable, please make sure you've entered correct forest name. If for some reason you can't use forest and you need some special configuration instead, please refer to examples directory provided by ovirt-engine-extension-aaa-ldap package.
[ INFO ] Stage: Clean up
Log file is available at /tmp/ovirt-engine-extension-aaa-ldap-setup-20201123113909-bj749k.log:
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
Can someone advise on this?
Thank you!
Best,
Latcho
4 years, 4 months
Ovirt VM import issue
by Deekshith
Hi Team,
We are not able to import the virtual machine from an OVA file into oVirt.
Kindly help us.
Regards
Deekshith
4 years, 4 months
Unable to live migrate a VM from 4.4.2 to 4.4.3 CentOS Linux host
by Gianluca Cecchi
Hello,
I was able to update an external CentOS Linux 8.2 standalone engine from
4.4.2 to 4.4.3 (see dedicated thread).
Then I was able to put into maintenance one 4.4.2 host (CentOS Linux 8.2
based, not ovirt node ng) and run:
[root@ov301 ~]# dnf update
Last metadata expiration check: 0:27:11 ago on Wed 11 Nov 2020 08:48:04 PM
CET.
Dependencies resolved.
======================================================================================================================
Package Arch Version
Repository Size
======================================================================================================================
Installing:
kernel x86_64 4.18.0-193.28.1.el8_2
BaseOS 2.8 M
kernel-core x86_64 4.18.0-193.28.1.el8_2
BaseOS 28 M
kernel-modules x86_64 4.18.0-193.28.1.el8_2
BaseOS 23 M
ovirt-ansible-collection noarch 1.2.1-1.el8
ovirt-4.4 276 k
replacing ovirt-ansible-engine-setup.noarch 1.2.4-1.el8
replacing ovirt-ansible-hosted-engine-setup.noarch 1.1.8-1.el8
Upgrading:
ansible noarch 2.9.15-2.el8
ovirt-4.4-centos-ovirt44 17 M
bpftool x86_64 4.18.0-193.28.1.el8_2
BaseOS 3.4 M
cockpit-ovirt-dashboard noarch 0.14.13-1.el8
ovirt-4.4 3.5 M
ioprocess x86_64 1.4.2-1.el8
ovirt-4.4 37 k
kernel-tools x86_64 4.18.0-193.28.1.el8_2
BaseOS 3.0 M
kernel-tools-libs x86_64 4.18.0-193.28.1.el8_2
BaseOS 2.8 M
libiscsi x86_64 1.18.0-8.module_el8.2.0+524+f765f7e0
AppStream 89 k
nftables x86_64 1:0.9.3-12.el8_2.1
BaseOS 311 k
ovirt-hosted-engine-ha noarch 2.4.5-1.el8
ovirt-4.4 325 k
ovirt-hosted-engine-setup noarch 2.4.8-1.el8
ovirt-4.4 227 k
ovirt-imageio-client x86_64 2.1.1-1.el8
ovirt-4.4 21 k
ovirt-imageio-common x86_64 2.1.1-1.el8
ovirt-4.4 155 k
ovirt-imageio-daemon x86_64 2.1.1-1.el8
ovirt-4.4 15 k
ovirt-provider-ovn-driver noarch 1.2.32-1.el8
ovirt-4.4 27 k
ovirt-release44 noarch 4.4.3-1.el8
ovirt-4.4 17 k
python3-ioprocess x86_64 1.4.2-1.el8
ovirt-4.4 33 k
python3-nftables x86_64 1:0.9.3-12.el8_2.1
BaseOS 25 k
python3-ovirt-engine-sdk4 x86_64 4.4.6-1.el8
ovirt-4.4 560 k
python3-perf x86_64 4.18.0-193.28.1.el8_2
BaseOS 2.9 M
python3-pyasn1 noarch 0.4.6-3.el8
ovirt-4.4-centos-opstools 140 k
python3-pyasn1-modules noarch 0.4.6-3.el8
ovirt-4.4-centos-opstools 151 k
qemu-img x86_64 15:4.2.0-29.el8.6
ovirt-4.4-advanced-virtualization 1.0 M
qemu-kvm x86_64 15:4.2.0-29.el8.6
ovirt-4.4-advanced-virtualization 118 k
qemu-kvm-block-curl x86_64 15:4.2.0-29.el8.6
ovirt-4.4-advanced-virtualization 129 k
qemu-kvm-block-gluster x86_64 15:4.2.0-29.el8.6
ovirt-4.4-advanced-virtualization 131 k
qemu-kvm-block-iscsi x86_64 15:4.2.0-29.el8.6
ovirt-4.4-advanced-virtualization 136 k
qemu-kvm-block-rbd x86_64 15:4.2.0-29.el8.6
ovirt-4.4-advanced-virtualization 130 k
qemu-kvm-block-ssh x86_64 15:4.2.0-29.el8.6
ovirt-4.4-advanced-virtualization 131 k
qemu-kvm-common x86_64 15:4.2.0-29.el8.6
ovirt-4.4-advanced-virtualization 1.2 M
qemu-kvm-core x86_64 15:4.2.0-29.el8.6
ovirt-4.4-advanced-virtualization 3.4 M
selinux-policy noarch 3.14.3-41.el8_2.8
BaseOS 615 k
selinux-policy-targeted noarch 3.14.3-41.el8_2.8
BaseOS 15 M
spice-server x86_64 0.14.2-1.el8_2.1
AppStream 404 k
tzdata noarch 2020d-1.el8
BaseOS 471 k
vdsm x86_64 4.40.35.1-1.el8
ovirt-4.4 1.4 M
vdsm-api noarch 4.40.35.1-1.el8
ovirt-4.4 106 k
vdsm-client noarch 4.40.35.1-1.el8
ovirt-4.4 24 k
vdsm-common noarch 4.40.35.1-1.el8
ovirt-4.4 136 k
vdsm-hook-ethtool-options noarch 4.40.35.1-1.el8
ovirt-4.4 9.8 k
vdsm-hook-fcoe noarch 4.40.35.1-1.el8
ovirt-4.4 10 k
vdsm-hook-openstacknet noarch 4.40.35.1-1.el8
ovirt-4.4 18 k
vdsm-hook-vhostmd noarch 4.40.35.1-1.el8
ovirt-4.4 17 k
vdsm-hook-vmfex-dev noarch 4.40.35.1-1.el8
ovirt-4.4 11 k
vdsm-http noarch 4.40.35.1-1.el8
ovirt-4.4 15 k
vdsm-jsonrpc noarch 4.40.35.1-1.el8
ovirt-4.4 31 k
vdsm-network x86_64 4.40.35.1-1.el8
ovirt-4.4 331 k
vdsm-python noarch 4.40.35.1-1.el8
ovirt-4.4 1.3 M
vdsm-yajsonrpc noarch 4.40.35.1-1.el8
ovirt-4.4 40 k
Installing dependencies:
NetworkManager-ovs x86_64 1:1.22.14-1.el8
ovirt-4.4-copr:copr.fedorainfracloud.org:networkmanager:NetworkManager-1.22
144 k
Transaction Summary
======================================================================================================================
Install 5 Packages
Upgrade 48 Packages
Total download size: 116 M
After reboot I can activate the host (strange that I see many pop up
messages about "finished activating host") and the host is shown as
OS Version: RHEL - 8.2 - 2.2004.0.2.el8
OS Description: CentOS Linux 8 (Core)
Kernel Version: 4.18.0 - 193.28.1.el8_2.x86_64
KVM Version: 4.2.0 - 29.el8.6
LIBVIRT Version: libvirt-6.0.0-25.2.el8
VDSM Version: vdsm-4.40.35.1-1.el8
SPICE Version: 0.14.2 - 1.el8_2.1
GlusterFS Version: [N/A]
CEPH Version: librbd1-12.2.7-9.el8
Open vSwitch Version: [N/A]
Nmstate Version: nmstate-0.2.10-1.el8
Kernel Features: MDS: (Vulnerable: Clear CPU buffers attempted, no
microcode; SMT vulnerable), L1TF: (Mitigation: PTE Inversion; VMX:
conditional cache flushes, SMT vulnerable), SRBDS: (Not affected),
MELTDOWN: (Mitigation: PTI), SPECTRE_V1: (Mitigation: usercopy/swapgs
barriers and __user pointer sanitization), SPECTRE_V2: (Mitigation: Full
generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB
filling), ITLB_MULTIHIT: (KVM: Mitigation: Split huge pages),
TSX_ASYNC_ABORT: (Not affected), SPEC_STORE_BYPASS: (Mitigation:
Speculative Store Bypass disabled via prctl and seccomp)
VNC Encryption: Disabled
FIPS mode enabled: Disabled
while another host still in 4.4.2:
OS Version: RHEL - 8.2 - 2.2004.0.2.el8
OS Description: CentOS Linux 8 (Core)
Kernel Version: 4.18.0 - 193.19.1.el8_2.x86_64
KVM Version: 4.2.0 - 29.el8.3
LIBVIRT Version: libvirt-6.0.0-25.2.el8
VDSM Version: vdsm-4.40.26.3-1.el8
SPICE Version: 0.14.2 - 1.el8
GlusterFS Version: [N/A]
CEPH Version: librbd1-12.2.7-9.el8
Open vSwitch Version: [N/A]
Nmstate Version: nmstate-0.2.10-1.el8
Kernel Features: MDS: (Vulnerable: Clear CPU buffers attempted, no
microcode; SMT vulnerable), L1TF: (Mitigation: PTE Inversion; VMX:
conditional cache flushes, SMT vulnerable), SRBDS: (Not affected),
MELTDOWN: (Mitigation: PTI), SPECTRE_V1: (Mitigation: usercopy/swapgs
barriers and __user pointer sanitization), SPECTRE_V2: (Mitigation: Full
generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB
filling), ITLB_MULTIHIT: (KVM: Mitigation: Split huge pages),
TSX_ASYNC_ABORT: (Not affected), SPEC_STORE_BYPASS: (Mitigation:
Speculative Store Bypass disabled via prctl and seccomp)
VNC Encryption: Disabled
FIPS mode enabled: Disabled
But if I try to move VMs away from the 4.4.2 host to the 4.4.3 one I get an
error:
Failed to migrate VM c8client to Host ov301 . Trying to migrate to another
Host.
(btw: there is no other active host; there is a ov300 host that is in
maintenance)
No available host was found to migrate VM c8client to.
It seems the root error in engine.log is:
2020-11-11 21:44:42,487+01 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-11) [] Migration of VM 'c8client' to host 'ov301'
failed: VM destroyed during the startup.
On target host in /var/log/libvirt/qemu/c8clinet.log I see:
2020-11-11 20:44:40.981+0000: shutting down, reason=failed
In target vdsm.log
2020-11-11 21:44:39,958+0100 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC
call VM.migrationCreate took more than 1.00
seconds to succeed: 1.97 (__init__:316)
2020-11-11 21:44:40,230+0100 INFO (periodic/3) [vdsm.api] START
repoStats(domains=()) from=internal,
task_id=cb51fd4a-09d3-4d77-821b-391da2467487 (api:48)
2020-11-11 21:44:40,231+0100 INFO (periodic/3) [vdsm.api] FINISH repoStats
return={'fa33df49-b09d-4f86-9719-ede649542c21': {'code': 0, 'lastCheck':
'4.1', 'delay': '0.000836715', 'valid': True, 'version': 4, 'acquired':
True, 'actual': True}} from=internal,
task_id=cb51fd4a-09d3-4d77-821b-391da2467487 (api:54)
2020-11-11 21:44:41,929+0100 INFO (jsonrpc/5) [api.virt] START
destroy(gracefulAttempts=1) from=::ffff:10.4.192.32,52266,
vmId=c95da734-7ed1-4caa-bacb-3fa24f4efb56 (api:48)
2020-11-11 21:44:41,930+0100 INFO (jsonrpc/5) [virt.vm]
(vmId='c95da734-7ed1-4caa-bacb-3fa24f4efb56') Release VM resources (vm:4666)
2020-11-11 21:44:41,930+0100 INFO (jsonrpc/5) [virt.vm]
(vmId='c95da734-7ed1-4caa-bacb-3fa24f4efb56') Stopping connection
(guestagent:444)
2020-11-11 21:44:41,930+0100 INFO (jsonrpc/5) [vdsm.api] START
teardownImage(sdUUID='fa33df49-b09d-4f86-9719-ede649542c21',
spUUID='ef17cad6-7724-4cd8-96e3-9af6e529db51',
imgUUID='ff10a405-cc61-4d00-a83f-3ee04b19f381', volUUID=None)
from=::ffff:10.4.192.32,52266, task_id=177461c0-83d6-4c90-9c5c-3cc8ee9150c7
(api:48)
It seems that during the host update the OVN configuration was not
retained.
Right now all my active VMs have at least one vNIC on OVN, so I cannot
test the scenario of migrating a VM without an OVN-based vNIC.
In fact on engine I see only the currently active host in 4.4.2 (ov200) and
another host that is in maintenance (it is still in 4.3.10; I wanted to
update to 4.4.2 but I realized that 4.4.3 has been out...):
[root@ovmgr1 ovirt-engine]# ovn-sbctl show
Chassis "6a46b802-5a50-4df5-b1af-e73f58a57164"
hostname: "ov200.mydomain"
Encap geneve
ip: "10.4.192.32"
options: {csum="true"}
Port_Binding "2ae7391b-4297-4247-a315-99312f6392e6"
Port_Binding "c1ec60a4-b4f3-4cb5-8985-43c086156e83"
Port_Binding "174b69f8-00ed-4e25-96fc-7db11ea8a8b9"
Port_Binding "66359e79-56c4-47e0-8196-2241706329f6"
Port_Binding "ccbd6188-78eb-437b-9df9-9929e272974b"
Chassis "ddecf0da-4708-4f93-958b-6af365a5eeca"
hostname: "ov300.mydomain"
Encap geneve
ip: "10.4.192.33"
options: {csum="true"}
[root@ovmgr1 ovirt-engine]#
Any hint about the reason for losing the OVN config on ov301, and the correct
procedure to get it back and persist it across future updates?
NOTE: this was a cluster in 4.3.10 and I updated it to 4.4.2 and I noticed
that the OVN config was not retained and I had to run on hosts:
[root@ov200 ~]# vdsm-tool ovn-config engine_ip ov200_ip_on_mgmt
Using default PKI files
Created symlink
/etc/systemd/system/multi-user.target.wants/openvswitch.service →
/usr/lib/systemd/system/openvswitch.service.
Created symlink
/etc/systemd/system/multi-user.target.wants/ovn-controller.service →
/usr/lib/systemd/system/ovn-controller.service.
[root@ov200 ~]#
Now it seems the problem persists...
Why do I have to run it each time?
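A sketch of how to check what a host currently has configured, before and after re-running vdsm-tool ovn-config:
ovs-vsctl get Open_vSwitch . external_ids
# the ovn-remote value should point at the engine and ovn-encap-ip at the host's current mgmt IP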
Gianluca
4 years, 4 months
Get Host Capabilities failed: Internal JSON-RPC error: {'reason': 'internal error: Duplicate key'}
by tommy
Hi,everyone:
I got this error in my ovirt env:
VDSM ooengh1.tltd.com command Get Host Capabilities failed: Internal
JSON-RPC error: {'reason': 'internal error: Duplicate key'}
Systemctl message is:
Dec 23 20:48:48 ooengh1.tltd.com vdsm[2431]: ERROR Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
    result = fn(*methodArgs)
  File "<string>", line 2, in getCapabilities
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1371, in getCapabilities
    c = caps.get()
  File "/usr/lib/python2.7/site-packages/vdsm/host/caps.py", line 93, in get
    machinetype.compatible_cpu_models())
  File "/usr/lib/python2.7/site-packages/vdsm/common/cache.py", line 43, in __call__
    value = self.func(*args)
  File "/usr/lib/python2.7/site-packages/vdsm/machinetype.py", line 142, in compatible_cpu_models
    all_models = domain_cpu_models(c, arch, cpu_mode)
  File "/usr/lib/python2.7/site-packages/vdsm/machinetype.py", line 97, in domain_cpu_models
    domcaps = conn.getDomainCapabilities(None, arch, None, virt_type, 0)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3844, in getDomainCapabilities
    if ret is None: raise libvirtError('virConnectGetDomainCapabilities() failed', conn=self)
libvirtError: internal error: Duplicate key
Can anyone help me?
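It may help to reproduce the failing libvirt call outside vdsm, to see which duplicated CPU definition libvirt is complaining about (sketch only, run on the affected host):
virsh -r domcapabilities --virttype kvm --arch x86_64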
Thanks!
4 years, 4 months
Upgrade to 4.4.4
by Jonathan Baecker
Hello,
I'm running an upgrade here from 4.4.3 to the latest 4.4.4, on a 3-node self-hosted cluster. The engine upgrade went fine and now I'm on the host upgrades. When I check the updates there, it shows only ovirt-node-ng-image-update-4.4.4-1.el8.noarch.rpm. For that I have run manual updates on each host, with maintenance mode -> yum update -> reboot.
When I now run cat /etc/redhat-release on the engine it shows:
CentOS Linux release 8.3.2011
But on my nodes it still shows:
CentOS Linux release 8.2.2004 (Core)
How can this be?
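On oVirt Node the OS is delivered as an image layer rather than through plain yum, so it may be worth checking which image the node actually booted (sketch, assumes oVirt Node hosts):
nodectl info
rpm -q ovirt-node-ng-image-update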
Best regards
Jonathan
4 years, 4 months
oVirt can not connect to KVM Libvirtd
by tommy
Hi, everyone!
I tried using oVirt to manage the KVM libvirtd server, like this:
But it got an error like this:
VDSM host1 command GetVmsNamesFromExternalProviderVDS failed: Cannot recv
data: Host key verification failed.: Connection reset by peer.
Should I configure the SSH host key? Where and how do I configure it?
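If I understand the KVM provider correctly, the proxy host connects to the libvirt server over qemu+ssh as the vdsm user, so that user has to have the remote host key accepted; a sketch (host name is a placeholder):
sudo -u vdsm ssh root@<kvm-libvirt-host>    # run on the oVirt host acting as proxy, accept the key once, then exit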
Thanks!
4 years, 4 months
OVA export issues
by Jonathan Baecker
Hello,
I have here an older server, one host with an external engine, a 4.4.4-1.el8
installation, with slow 2Gbit NFS storage. When I export OVAs from
smaller VMs (under 40GB) it works, but with bigger ones, around 95GB, I
have problems.
I have already changed the export target to a local folder, which makes
the process faster and the GUI shows no errors, but when I extract the
vm.ovf file from the exported archive, it is not complete. It is missing a
closing part of the XML definition. The end stops here: </ovf:E
In an earlier test I was able to add the closing part by hand and import
the OVA, but then the VM had no disk attached. I don't know if the size
is the problem; my backup script, which exports the VMs to the export
domain, works normally. I only know that I can reproduce the error on at
least 2 VMs.
The last part of the export log looks like:
/var/log/ovirt-engine/ansible-runner-service.log:2020-12-19
12:02:52,610 - runner_service.services.playbook - DEBUG -
cb_event_handler event_data={'uuid':
'2ac62a85-7efa-4d14-a941-c1880bd016fd', 'counter': 35, 'stdout':
'changed: [onode-2.example.org]', 'start_line': 34, 'end_line': 35,
'runner_ident': '9ac880ce-4219-11eb-bfd8-5254000e4c2c', 'event':
'runner_on_ok', 'pid': 1039034, 'created':
'2020-12-19T17:02:52.606639', 'parent_uuid':
'5254000e-4c2c-ed4e-fdf7-000000000022', 'event_data': {'playbook':
'ovirt-ova-export.yml', 'playbook_uuid':
'ee4a698c-f639-49c8-8fa9-af2778f0862d', 'play': 'all', 'play_uuid':
'5254000e-4c2c-ed4e-fdf7-000000000007', 'play_pattern': 'all',
'task': 'Rename the OVA file', 'task_uuid':
'5254000e-4c2c-ed4e-fdf7-000000000022', 'task_action': 'command',
'task_args': '', 'task_path':
'/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-ova-export-post-pack/tasks/main.yml:2',
'role': 'ovirt-ova-export-post-pack', 'host':
'onode-2.discovery.intern', 'remote_addr':
'onode-2.discovery.intern', 'res': {'cmd': ['mv',
'/mnt/intern/win2016-01.ova.tmp', '/mnt/intern/win2016-01.ova'],
'stdout': '', 'stderr': '', 'rc': 0, 'start': '2020-12-19
12:02:52.560917', 'end': '2020-12-19 12:02:52.572358', 'delta':
'0:00:00.011441', 'changed': True, 'invocation': {'module_args':
{'_raw_params': 'mv "/mnt/intern/win2016-01.ova.tmp"
"/mnt/intern/win2016-01.ova"', 'warn': True, '_uses_shell': False,
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None,
'chdir': None, 'executable': None, 'creates': None, 'removes': None,
'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [],
'_ansible_no_log': False}, 'start': '2020-12-19T17:02:51.954144',
'end': '2020-12-19T17:02:52.605820', 'duration': 0.651676,
'event_loop': None, 'uuid': '2ac62a85-7efa-4d14-a941-c1880bd016fd'}}
Any ideas?
Best regards
Jonathan
4 years, 4 months
Glusternetwork
by Ariez Ahito
Can someone help me try to assign the gluster network?
This is my current setup: hosted-engine with standalone gluster.
oVirt: host 1, host 2, host 3 (and so on)
eno1 192.168.0.10 ovirtmgmt
enfs20 VM networks: dmz_1 dmz_2 dmz_3 (and so on)
enfs21 <--- I want to assign this physical interface to the gluster network
Gluster: gluster1, gluster2 (and so on)
eno1 192.168.0.11
eno2
I followed the instructions in the official documentation:
Create the logical network for gluster traffic
Log in to the engine
Browse to the engine and log in using the administrative credentials you configured in Chapter 8, Deploy the Hosted Engine using the Cockpit UI.
Create a logical network for gluster traffic
Click the Networks tab and then click New. The New Logical Network wizard appears.
On the General tab of the wizard, provide a Name for the new logical network, and uncheck the VM Network checkbox.
On the Cluster tab of the wizard, uncheck the Required checkbox.
Click OK to create the new logical network.
Enable the new logical network for gluster
Click the Networks tab and select the new logical network.
Click the Clusters sub-tab and then click Manage Network. The Manage Network dialogue appears.
In the Manage Network dialogue, check the Migration Network and Gluster Network checkboxes.
Click OK to save.
Attach the gluster network to the host
Click the Hosts tab and select the host.
Click the Network Interfaces subtab and then click Setup Host Networks.
Drag and drop the newly created network to the correct interface.
Ensure that the Verify connectivity checkbox is checked.
Ensure that the Save network configuration checkbox is checked.
Click OK to save.
so when i go to Network > and click the logical network i created (gluster_net),
the cluster tab shows the network status as down?
i then click the Manage Network button and assign the migration network and gluster network roles
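For reference, a very rough equivalent of the create-and-attach steps above
through the REST API instead of the UI (engine FQDN, credentials and UUIDs are
placeholders and the payloads are unverified; the VM-network checkbox and the
host attachment step are not covered here):
# 1. create the logical network in the data center
curl -k -u 'admin@internal:password' -H 'Content-Type: application/xml' \
  -d '<network><name>gluster_net</name><data_center id="DC-UUID"/></network>' \
  https://engine.example.org/ovirt-engine/api/networks
# 2. attach it to the cluster as a non-required gluster + migration network
curl -k -u 'admin@internal:password' -H 'Content-Type: application/xml' \
  -d '<network id="NET-UUID"><required>false</required><usages><usage>gluster</usage><usage>migration</usage></usages></network>' \
  https://engine.example.org/ovirt-engine/api/clusters/CLUSTER-UUID/networks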
4 years, 4 months
assigning logical networks to physical interface
by Ariez Ahito
Hi guys, could someone help or guide me on how to assign a logical network to a physical interface?
our last ovirt engine was version 4.0,
but with this latest release 4.4 it seems some options have gone missing.
4 years, 4 months
Adding host to hosted engine fails
by Ariez Ahito
here is our setup
stand-alone glusterfs storage, replica 3
10.33.50.33
10.33.50.34
10.33.50.35
we deployed hosted-engine and managed to connect to our glusterfs storage
now we are having issues adding hosts
here are the logs
dsm.gluster.exception.GlusterVolumesListFailedException: Volume list failed: rc=1 out=() err=['Command {self.cmd} failed with rc={self.rc} out={self.out!r} err={self.err!r}']
2020-12-17 14:22:27,106+0800 INFO (jsonrpc/4) [storage.StorageDomainCache] Invalidating storage domain cache (sdc:74)
2020-12-17 14:22:27,106+0800 INFO (jsonrpc/4) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': 'afa2d41a-d817-4f4a-bd35-5ffedd1fa65b', 'status': 4149}]} from=::ffff:10.33.0.10,50058, flow_id=6170eaa3, task_id=f00d28fa-077f-403a-8024-9f9b533bccb5 (api:54)
2020-12-17 14:22:27,107+0800 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer took more than 1.00 seconds to succeed: 3.34 (__init__:316)
2020-12-17 14:22:27,213+0800 INFO (jsonrpc/6) [vdsm.api] START connectStorageServer(domType=7, spUUID='1abdb9e4-3f85-11eb-9994-00163e4e4935', conList=[{'password': '********', 'vfs_type': 'glusterfs', 'port': '', 'mnt_options': 'backup-volfile-servers=gluster3:gluster4', 'iqn': '', 'connection': 'gluster3:/VOL2', 'ipv6_enabled': 'false', 'id': '2fb6989d-b26b-42e7-af35-4e4cf718eebf', 'user': '', 'tpgt': '1'}, {'password': '********', 'vfs_type': 'glusterfs', 'port': '', 'mnt_options': 'backup-volfile-servers=gluster3:gluster4', 'iqn': '', 'connection': 'gluster3:/VOL3', 'ipv6_enabled': 'false', 'id': 'b7839bcd-c0e3-422c-8f2c-47351d24b6de', 'user': '', 'tpgt': '1'}], options=None) from=::ffff:10.33.0.10,50058, flow_id=6170eaa3, task_id=cfeb3401-54b9-4756-b306-88d4275c0690 (api:48)
2020-12-17 14:22:29,058+0800 INFO (periodic/1) [vdsm.api] START repoStats(domains=()) from=internal, task_id=e9648d47-2ffb-4387-9a72-af41ab51adf7 (api:48)
2020-12-17 14:22:29,058+0800 INFO (periodic/1) [vdsm.api] FINISH repoStats return={} from=internal, task_id=e9648d47-2ffb-4387-9a72-af41ab51adf7 (api:54)
2020-12-17 14:22:30,512+0800 ERROR (jsonrpc/6) [storage.HSM] Could not connect to storageServer (hsm:2444)
in the events tab
The error message for connection gluster3:/ISO returned by VDSM was: Failed to fetch Gluster Volume List
The error message for connection gluster3:/VOL1 returned by VDSM was: Failed to fetch Gluster Volume List
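A first check worth running from the new host itself, assuming vdsm uses the
local gluster CLI to query the remote volume list (host, package and volume
names taken from the logs above):
# are the gluster client bits vdsm needs installed on the host?
rpm -q glusterfs-cli vdsm-gluster
# can the host query the stand-alone gluster servers directly?
gluster --remote-host=gluster3 volume list
gluster --remote-host=10.33.50.33 volume list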
thanks
4 years, 4 months
Re: [EXT] Re: v4.4.3 Node Cockpit Gluster deploy fails
by Charles Lam
Thank you Donald! Your and Strahil's suggested solutions regarding disabling multipath for the nvme drives were correct. The Gluster deployment progressed much further but stalled at
TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **********
task path: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
with
"stdout": "One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals\nVolume heal failed."
Specifically
TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **********
task path: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'engine', 'brick': '/gluster_bricks/engine/engine', 'arbiter': 0}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["gluster", "volume", "heal", "engine", "granular-entry-heal", "enable"], "delta": "0:00:10.112451", "end": "2020-12-18 19:50:22.818741", "item": {"arbiter": 0, "brick": "/gluster_bricks/engine/engine", "volname": "engine"}, "msg": "non-zero return code", "rc": 107, "start": "2020-12-18 19:50:12.706290", "stderr": "", "stderr_lines": [], "stdout": "One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'data', 'brick': '/gluster_bricks/data/data', 'arbiter': 0}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["gluster", "volume", "heal", "data", "granular-entry-heal", "enable"], "delta": "0:00:10.110165", "end": "2020-12-18 19:50:38.260277", "item": {"arbiter": 0, "brick": "/gluster_bricks/data/data", "volname": "data"}, "msg": "non-zero return code", "rc": 107, "start": "2020-12-18 19:50:28.150112", "stderr": "", "stderr_lines": [], "stdout": "One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'vmstore', 'brick': '/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["gluster", "volume", "heal", "vmstore", "granular-entry-heal", "enable"], "delta": "0:00:10.113203", "end": "2020-12-18 19:50:53.767864", "item": {"arbiter": 0, "brick": "/gluster_bricks/vmstore/vmstore", "volname": "vmstore"}, "msg": "non-zero return code", "rc": 107, "start": "2020-12-18 19:50:43.654661", "stderr": "", "stderr_lines": [], "stdout": "One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals", "Volume heal failed."]}
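For what it's worth, the usual first check here is whether all bricks really
are online before retrying the failing step by hand (rough sketch, volume name
taken from the output above; the same applies to data and vmstore):
gluster volume status engine      # every brick should be listed as online
gluster volume heal engine info   # entries still pending heal, if any
# once all bricks are up, retry what the role attempted:
gluster volume heal engine granular-entry-heal enable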
As this is a different issue, I will post a new thread.
Gratefully yours,
Charles
4 years, 4 months
v4.4.3 Node Cockpit Gluster deploy fails
by Charles Lam
Dear friends,
oVirt Node v4.4.3 is failing for me at the Cockpit Gluster deployment on a 3-node setup on which I previously deployed hyperconverged oVirt Node v4.4.2 successfully. The only difference from the v4.4.2 setup is that I have taken the switch out of the storage network and directly cabled the nodes to each other. Routing and lookup via the hosts file are set up, and ssh from host1 to itself, host2 and host3 works on both the storage and management networks.
The failure is at:
TASK [gluster.infra/roles/backend_setup : Create VDO with specified size] ******
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:53
Specifically:
"vdo: ERROR - Can't open /dev/nvme0n1 exclusively. Mounted filesystem?\n"
as in:
failed: [host1.fqdn.tld] (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": "vdo: ERROR - Can't open /dev/nvme0n1 exclusively. Mounted filesystem?\n", "index": 0, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "msg": "Creating VDO vdo_nvme0n1 failed.", "rc": 1}
I have wiped and rebuilt the array a couple of times and tried different VDO sizes. pvcreate --test succeeds on each drive in the array.
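A few checks that may show what is still holding the device open (device name
taken from the error above; wipefs is run in report-only mode):
lsblk /dev/nvme0n1                # partitions or device-mapper holders?
ls /sys/block/nvme0n1/holders/    # kernel view of what claims the device
multipath -ll                     # are the NVMe drives still mapped by multipath?
wipefs -n /dev/nvme0n1            # report (do not erase) leftover signatures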
Any assistance or pointers for further troubleshooting are greatly appreciated. The full gluster-deployment.log follows.
Respectfully,
Charles
/var/log/cockpit/ovirt-dashboard/gluster-deployment.log
---
ansible-playbook 2.9.15
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /root/../usr/bin/ansible-playbook
python version = 3.6.8 (default, Apr 16 2020, 01:36:27) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
Using /etc/ansible/ansible.cfg as config file
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/lvm_exclude_filter.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/lvm_config.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main-lvm.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_pool_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_volume_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/cache_setup.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main-lvm.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thick_lv_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_pool_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/thin_volume_create.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/cache_setup.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/fscreate.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/mount.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/lvm_kernelparams.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/fstrim_service.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/luks_device_encrypt.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/bind_tang_server.yml
statically imported: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/regenerate_new_lvm_filter_rules.yml
statically imported: /etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/prerequisites.yml
statically imported: /etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/distribute_keys.yml
statically imported: /etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/master_tasks.yml
statically imported: /etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/enable_ganesha.yml
statically imported: /etc/ansible/roles/gluster.features/roles/nfs_ganesha/tasks/add_new_nodes.yml
statically imported: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/prerequisites.yml
statically imported: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/glusterd_ipv6.yml
statically imported: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml
statically imported: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/ssl-setup.yml
statically imported: /etc/ansible/roles/gluster.features/roles/ctdb/tasks/setup_ctdb.yml
PLAYBOOK: hc_wizard.yml ********************************************************
1 plays in /root/../usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml
PLAY [Setup backend] ***********************************************************
TASK [Gathering Facts] *********************************************************
task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:4
ok: [host2.fqdn.tld]
ok: [host3.fqdn.tld]
ok: [host1.fqdn.tld]
TASK [Check if valid hostnames are provided] ***********************************
task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:16
changed: [host1.fqdn.tld] => (item=host1.fqdn.tld) => {"ansible_loop_var": "item", "changed": true, "cmd": ["getent", "ahosts", "host1.fqdn.tld"], "delta": "0:00:00.004720", "end": "2020-12-17 23:24:13.623486", "item": "host1.fqdn.tld", "rc": 0, "start": "2020-12-17 23:24:13.618766", "stderr": "", "stderr_lines": [], "stdout": "172.16.16.1 STREAM host1.fqdn.tld\n172.16.16.1 DGRAM \n172.16.16.1 RAW \n172.16.16.5 STREAM \n172.16.16.5 DGRAM \n172.16.16.5 RAW ", "stdout_lines": ["172.16.16.1 STREAM host1.fqdn.tld", "172.16.16.1 DGRAM ", "172.16.16.1 RAW ", "172.16.16.5 STREAM ", "172.16.16.5 DGRAM ", "172.16.16.5 RAW "]}
changed: [host1.fqdn.tld] => (item=host2.fqdn.tld) => {"ansible_loop_var": "item", "changed": true, "cmd": ["getent", "ahosts", "host2.fqdn.tld"], "delta": "0:00:00.004792", "end": "2020-12-17 23:24:13.826343", "item": "host2.fqdn.tld", "rc": 0, "start": "2020-12-17 23:24:13.821551", "stderr": "", "stderr_lines": [], "stdout": "172.16.16.6 STREAM host2.fqdn.tld\n172.16.16.6 DGRAM \n172.16.16.6 RAW \n172.16.16.9 STREAM \n172.16.16.9 DGRAM \n172.16.16.9 RAW ", "stdout_lines": ["172.16.16.6 STREAM host2.fqdn.tld", "172.16.16.6 DGRAM ", "172.16.16.6 RAW ", "172.16.16.9 STREAM ", "172.16.16.9 DGRAM ", "172.16.16.9 RAW "]}
changed: [host1.fqdn.tld] => (item=host3.fqdn.tld) => {"ansible_loop_var": "item", "changed": true, "cmd": ["getent", "ahosts", "host3.fqdn.tld"], "delta": "0:00:00.004815", "end": "2020-12-17 23:24:14.024785", "item": "host3.fqdn.tld", "rc": 0, "start": "2020-12-17 23:24:14.019970", "stderr": "", "stderr_lines": [], "stdout": "172.16.16.2 STREAM host3.fqdn.tld\n172.16.16.2 DGRAM \n172.16.16.2 RAW \n172.16.16.10 STREAM \n172.16.16.10 DGRAM \n172.16.16.10 RAW ", "stdout_lines": ["172.16.16.2 STREAM host3.fqdn.tld", "172.16.16.2 DGRAM ", "172.16.16.2 RAW ", "172.16.16.10 STREAM ", "172.16.16.10 DGRAM ", "172.16.16.10 RAW "]}
TASK [Check if provided hostnames are valid] ***********************************
task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:29
ok: [host1.fqdn.tld] => {
"changed": false,
"msg": "All assertions passed"
}
ok: [host2.fqdn.tld] => {
"changed": false,
"msg": "All assertions passed"
}
ok: [host3.fqdn.tld] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [Check if /var/log has enough disk space] *********************************
task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:38
skipping: [host1.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
TASK [Check if the /var is greater than 15G] ***********************************
task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:43
skipping: [host1.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
TASK [Check if block device is 512B] *******************************************
task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:50
changed: [host2.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006210", "end": "2020-12-17 23:24:18.552824", "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "rc": 0, "start": "2020-12-17 23:24:18.546614", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
changed: [host1.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006841", "end": "2020-12-17 23:24:18.585916", "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "rc": 0, "start": "2020-12-17 23:24:18.579075", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
changed: [host3.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006546", "end": "2020-12-17 23:24:18.608937", "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "rc": 0, "start": "2020-12-17 23:24:18.602391", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
changed: [host2.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006586", "end": "2020-12-17 23:24:23.488187", "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "rc": 0, "start": "2020-12-17 23:24:23.481601", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
changed: [host3.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006504", "end": "2020-12-17 23:24:23.590860", "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "rc": 0, "start": "2020-12-17 23:24:23.584356", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
changed: [host1.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006584", "end": "2020-12-17 23:24:23.642959", "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "rc": 0, "start": "2020-12-17 23:24:23.636375", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
changed: [host2.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006582", "end": "2020-12-17 23:24:28.423302", "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "rc": 0, "start": "2020-12-17 23:24:28.416720", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
changed: [host3.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006565", "end": "2020-12-17 23:24:28.551125", "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "rc": 0, "start": "2020-12-17 23:24:28.544560", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
changed: [host1.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006553", "end": "2020-12-17 23:24:28.671853", "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "rc": 0, "start": "2020-12-17 23:24:28.665300", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
TASK [Check if block device is 4KN] ********************************************
task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:56
changed: [host2.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta": "0:00:00.006626", "end": "2020-12-17 23:24:33.742286", "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "rc": 0, "start": "2020-12-17 23:24:33.735660", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
changed: [host1.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta": "0:00:00.006717", "end": "2020-12-17 23:24:33.755045", "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "rc": 0, "start": "2020-12-17 23:24:33.748328", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
changed: [host3.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta": "0:00:00.006583", "end": "2020-12-17 23:24:33.798476", "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "rc": 0, "start": "2020-12-17 23:24:33.791893", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
changed: [host2.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta": "0:00:00.006574", "end": "2020-12-17 23:24:38.704857", "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "rc": 0, "start": "2020-12-17 23:24:38.698283", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
changed: [host1.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta": "0:00:00.006736", "end": "2020-12-17 23:24:38.743356", "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "rc": 0, "start": "2020-12-17 23:24:38.736620", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
changed: [host3.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta": "0:00:00.006516", "end": "2020-12-17 23:24:38.762054", "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "rc": 0, "start": "2020-12-17 23:24:38.755538", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
changed: [host2.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta": "0:00:00.006611", "end": "2020-12-17 23:24:43.634620", "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "rc": 0, "start": "2020-12-17 23:24:43.628009", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
changed: [host1.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta": "0:00:00.006624", "end": "2020-12-17 23:24:43.741663", "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "rc": 0, "start": "2020-12-17 23:24:43.735039", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
changed: [host3.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta": "0:00:00.006541", "end": "2020-12-17 23:24:43.758283", "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "rc": 0, "start": "2020-12-17 23:24:43.751742", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}
TASK [fail] ********************************************************************
task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:62
skipping: [host1.fqdn.tld] => (item=[{'cmd': 'blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q "512" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:18.579075', 'end': '2020-12-17 23:24:18.585916', 'delta': '0:00:00.006841', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q "512" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}, 'ansible_loop_var': 'item'}, {'cmd': 'blockdev --getss /dev/mapper/vdo_nvme0n1 |
grep -Po -q "4096" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:33.748328', 'end': '2020-12-17 23:24:33.755045', 'delta': '0:00:00.006717', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q "4096" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}, 'ansible_loop_var': 'item'}]) => {"ansible_loop_var": "item", "changed": false, "item": [{"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev
/mapper/vdo_nvme0n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006841", "end": "2020-12-17 23:24:18.585916", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"512\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "rc": 0, "start": "2020-12-17 23:24:18.579075", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}, {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta":
"0:00:00.006717", "end": "2020-12-17 23:24:33.755045", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"4096\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "rc": 0, "start": "2020-12-17 23:24:33.748328", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}], "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item=[{'cmd': 'blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q "512" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:18.579075', 'end': '2020-12-17 23:24:18.585916', 'delta': '0:00:00.006841', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q "512" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}, 'ansible_loop_var': 'item'}, {'cmd': 'blockdev --getss /dev/mapper/vdo_nvme2n1 |
grep -Po -q "4096" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:38.736620', 'end': '2020-12-17 23:24:38.743356', 'delta': '0:00:00.006736', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q "4096" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}, 'ansible_loop_var': 'item'}]) => {"ansible_loop_var": "item", "changed": false, "item": [{"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev
/mapper/vdo_nvme0n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006841", "end": "2020-12-17 23:24:18.585916", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"512\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "rc": 0, "start": "2020-12-17 23:24:18.579075", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}, {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta":
"0:00:00.006736", "end": "2020-12-17 23:24:38.743356", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q \"4096\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "rc": 0, "start": "2020-12-17 23:24:38.736620", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}], "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item=[{'cmd': 'blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q "512" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:18.579075', 'end': '2020-12-17 23:24:18.585916', 'delta': '0:00:00.006841', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q "512" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}, 'ansible_loop_var': 'item'}, {'cmd': 'blockdev --getss /dev/mapper/vdo_nvme1n1 |
grep -Po -q "4096" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:43.735039', 'end': '2020-12-17 23:24:43.741663', 'delta': '0:00:00.006624', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q "4096" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}, 'ansible_loop_var': 'item'}]) => {"ansible_loop_var": "item", "changed": false, "item": [{"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev
/mapper/vdo_nvme0n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006841", "end": "2020-12-17 23:24:18.585916", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"512\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "rc": 0, "start": "2020-12-17 23:24:18.579075", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}, {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta":
"0:00:00.006624", "end": "2020-12-17 23:24:43.741663", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"4096\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "rc": 0, "start": "2020-12-17 23:24:43.735039", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}], "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item=[{'cmd': 'blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q "512" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:23.636375', 'end': '2020-12-17 23:24:23.642959', 'delta': '0:00:00.006584', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q "512" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}, 'ansible_loop_var': 'item'}, {'cmd': 'blockdev --getss /dev/mapper/vdo_nvme0n1 |
grep -Po -q "4096" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:33.748328', 'end': '2020-12-17 23:24:33.755045', 'delta': '0:00:00.006717', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q "4096" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}, 'ansible_loop_var': 'item'}]) => {"ansible_loop_var": "item", "changed": false, "item": [{"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev
/mapper/vdo_nvme2n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006584", "end": "2020-12-17 23:24:23.642959", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q \"512\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "rc": 0, "start": "2020-12-17 23:24:23.636375", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}, {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta":
"0:00:00.006717", "end": "2020-12-17 23:24:33.755045", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"4096\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "rc": 0, "start": "2020-12-17 23:24:33.748328", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}], "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item=[{'cmd': 'blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q "512" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:23.636375', 'end': '2020-12-17 23:24:23.642959', 'delta': '0:00:00.006584', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q "512" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}, 'ansible_loop_var': 'item'}, {'cmd': 'blockdev --getss /dev/mapper/vdo_nvme2n1 |
grep -Po -q "4096" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:38.736620', 'end': '2020-12-17 23:24:38.743356', 'delta': '0:00:00.006736', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q "4096" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}, 'ansible_loop_var': 'item'}]) => {"ansible_loop_var": "item", "changed": false, "item": [{"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev
/mapper/vdo_nvme2n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006584", "end": "2020-12-17 23:24:23.642959", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q \"512\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "rc": 0, "start": "2020-12-17 23:24:23.636375", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}, {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta":
"0:00:00.006736", "end": "2020-12-17 23:24:38.743356", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q \"4096\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "rc": 0, "start": "2020-12-17 23:24:38.736620", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}], "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item=[{'cmd': 'blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q "512" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:23.636375', 'end': '2020-12-17 23:24:23.642959', 'delta': '0:00:00.006584', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q "512" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}, 'ansible_loop_var': 'item'}, {'cmd': 'blockdev --getss /dev/mapper/vdo_nvme1n1 |
grep -Po -q "4096" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:43.735039', 'end': '2020-12-17 23:24:43.741663', 'delta': '0:00:00.006624', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q "4096" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}, 'ansible_loop_var': 'item'}]) => {"ansible_loop_var": "item", "changed": false, "item": [{"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev
/mapper/vdo_nvme2n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006584", "end": "2020-12-17 23:24:23.642959", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q \"512\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "rc": 0, "start": "2020-12-17 23:24:23.636375", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}, {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta":
"0:00:00.006624", "end": "2020-12-17 23:24:43.741663", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"4096\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "rc": 0, "start": "2020-12-17 23:24:43.735039", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}], "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item=[{'cmd': 'blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q "512" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:18.546614', 'end': '2020-12-17 23:24:18.552824', 'delta': '0:00:00.006210', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q "512" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}, 'ansible_loop_var': 'item'}, {'cmd': 'blockdev --getss /dev/mapper/vdo_nvme0n1 |
grep -Po -q "4096" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:33.735660', 'end': '2020-12-17 23:24:33.742286', 'delta': '0:00:00.006626', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q "4096" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}, 'ansible_loop_var': 'item'}]) => {"ansible_loop_var": "item", "changed": false, "item": [{"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev
/mapper/vdo_nvme0n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006210", "end": "2020-12-17 23:24:18.552824", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"512\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "rc": 0, "start": "2020-12-17 23:24:18.546614", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}, {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta":
"0:00:00.006626", "end": "2020-12-17 23:24:33.742286", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"4096\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "rc": 0, "start": "2020-12-17 23:24:33.735660", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}], "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item=[
    {'cmd': 'blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q "512" && echo true || echo false',
     'stdout': 'false', 'rc': 0, 'changed': True, 'failed': False,
     'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory',
     'item': {'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}},
    {'cmd': 'blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q "4096" && echo true || echo false',
     'stdout': 'false', 'rc': 0, 'changed': True, 'failed': False,
     'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory',
     'item': {'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}}
  ]) => {"changed": false, "skip_reason": "Conditional result was False"}
[... the same "skipping" entry repeats for host1.fqdn.tld, host2.fqdn.tld and host3.fqdn.tld, for each of the /dev/mapper/vdo_nvme0n1, vdo_nvme1n1 and vdo_nvme2n1 devices and for both the 512 and the 4096 sector-size probes; every probe returns stdout "false" with "blockdev: cannot open /dev/mapper/vdo_nvme*n1: No such file or directory" on stderr, so the conditional evaluates to false and the task is skipped with "Conditional result was False" ...]
skipping: [host3.fqdn.tld] => (item=[{'cmd': 'blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q "512" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:23.584356', 'end': '2020-12-17 23:24:23.590860', 'delta': '0:00:00.006504', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q "512" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}, 'ansible_loop_var': 'item'}, {'cmd': 'blockdev --getss /dev/mapper/vdo_nvme1n1 |
grep -Po -q "4096" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:43.751742', 'end': '2020-12-17 23:24:43.758283', 'delta': '0:00:00.006541', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q "4096" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}, 'ansible_loop_var': 'item'}]) => {"ansible_loop_var": "item", "changed": false, "item": [{"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev
/mapper/vdo_nvme2n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006504", "end": "2020-12-17 23:24:23.590860", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q \"512\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "rc": 0, "start": "2020-12-17 23:24:23.584356", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}, {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta":
"0:00:00.006541", "end": "2020-12-17 23:24:43.758283", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"4096\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "rc": 0, "start": "2020-12-17 23:24:43.751742", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}], "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item=[{'cmd': 'blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q "512" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:28.544560', 'end': '2020-12-17 23:24:28.551125', 'delta': '0:00:00.006565', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q "512" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}, 'ansible_loop_var': 'item'}, {'cmd': 'blockdev --getss /dev/mapper/vdo_nvme0n1 |
grep -Po -q "4096" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:33.791893', 'end': '2020-12-17 23:24:33.798476', 'delta': '0:00:00.006583', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q "4096" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}, 'ansible_loop_var': 'item'}]) => {"ansible_loop_var": "item", "changed": false, "item": [{"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev
/mapper/vdo_nvme1n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006565", "end": "2020-12-17 23:24:28.551125", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"512\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "rc": 0, "start": "2020-12-17 23:24:28.544560", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}, {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta":
"0:00:00.006583", "end": "2020-12-17 23:24:33.798476", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme0n1 | grep -Po -q \"4096\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "rc": 0, "start": "2020-12-17 23:24:33.791893", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme0n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}], "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item=[{'cmd': 'blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q "512" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:28.544560', 'end': '2020-12-17 23:24:28.551125', 'delta': '0:00:00.006565', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q "512" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}, 'ansible_loop_var': 'item'}, {'cmd': 'blockdev --getss /dev/mapper/vdo_nvme2n1 |
grep -Po -q "4096" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:38.755538', 'end': '2020-12-17 23:24:38.762054', 'delta': '0:00:00.006516', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q "4096" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}, 'ansible_loop_var': 'item'}]) => {"ansible_loop_var": "item", "changed": false, "item": [{"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev
/mapper/vdo_nvme1n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006565", "end": "2020-12-17 23:24:28.551125", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"512\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "rc": 0, "start": "2020-12-17 23:24:28.544560", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}, {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta":
"0:00:00.006516", "end": "2020-12-17 23:24:38.762054", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme2n1 | grep -Po -q \"4096\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "rc": 0, "start": "2020-12-17 23:24:38.755538", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme2n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}], "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item=[{'cmd': 'blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q "512" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:28.544560', 'end': '2020-12-17 23:24:28.551125', 'delta': '0:00:00.006565', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q "512" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}, 'ansible_loop_var': 'item'}, {'cmd': 'blockdev --getss /dev/mapper/vdo_nvme1n1 |
grep -Po -q "4096" && echo true || echo false\n', 'stdout': 'false', 'stderr': 'blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory', 'rc': 0, 'start': '2020-12-17 23:24:43.751742', 'end': '2020-12-17 23:24:43.758283', 'delta': '0:00:00.006541', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q "4096" && echo true || echo false\n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['false'], 'stderr_lines': ['blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory'], 'failed': False, 'item': {'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}, 'ansible_loop_var': 'item'}]) => {"ansible_loop_var": "item", "changed": false, "item": [{"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev
/mapper/vdo_nvme1n1 | grep -Po -q \"512\" && echo true || echo false\n", "delta": "0:00:00.006565", "end": "2020-12-17 23:24:28.551125", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"512\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "rc": 0, "start": "2020-12-17 23:24:28.544560", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}, {"ansible_loop_var": "item", "changed": true, "cmd": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"4096\" && echo true || echo false\n", "delta":
"0:00:00.006541", "end": "2020-12-17 23:24:43.758283", "failed": false, "invocation": {"module_args": {"_raw_params": "blockdev --getss /dev/mapper/vdo_nvme1n1 | grep -Po -q \"4096\" && echo true || echo false\n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "rc": 0, "start": "2020-12-17 23:24:43.751742", "stderr": "blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory", "stderr_lines": ["blockdev: cannot open /dev/mapper/vdo_nvme1n1: No such file or directory"], "stdout": "false", "stdout_lines": ["false"]}], "skip_reason": "Conditional result was False"}
TASK [Check if disks have logical block size of 512B] **************************
task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:72
skipping: [host1.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "skip_reason": "Conditional result was False"}
TASK [Check if logical block size is 512 bytes] ********************************
task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:80
skipping: [host1.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
TASK [Get logical block size of VDO devices] ***********************************
task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:92
skipping: [host1.fqdn.tld] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False"}
TASK [Check if logical block size is 512 bytes for VDO devices] ****************
task path: /usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.yml:99
skipping: [host1.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item=Logical Block Size) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": false, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}, "skip_reason": "Conditional result was False", "skipped": true}, "skip_reason": "Conditional result was False"}
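(The VDO parameters visible in the items above -- emulate512 off, 1000G/5000G logical size, 2G/32G slab, 128M block map cache, 16M max discard -- correspond roughly to a manual invocation like the following; only the values come from the log, the exact command line is an assumption. With emulate512 off the resulting /dev/mapper/vdo_* device exposes 4096-byte logical blocks rather than 512.)

# illustrative only -- values taken from the log, command line assumed
vdo create --name=vdo_nvme1n1 --device=/dev/nvme1n1 \
    --vdoLogicalSize=5000G --vdoSlabSize=32G \
    --blockMapCacheSize=128M --maxDiscardSize=16M \
    --emulate512=disabled --writePolicy=auto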
META: ran handlers
TASK [gluster.infra/roles/firewall_config : Start firewalld if not already started] ***
task path: /etc/ansible/roles/gluster.infra/roles/firewall_config/tasks/main.yml:3
ok: [host1.fqdn.tld] => {"changed": false, "name": "firewalld", "state": "started", "status": {"ActiveEnterTimestamp": "Thu 2020-12-17 23:08:14 UTC", "ActiveEnterTimestampMonotonic": "193758194", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "polkit.service basic.target dbus.socket sysinit.target system.slice dbus.service", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Thu 2020-12-17 23:08:14 UTC", "AssertTimestampMonotonic": "193433680", "Before": "network-pre.target shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "no", "CPUAffinity": "", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0
755", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Thu 2020-12-17 23:08:14 UTC", "ConditionTimestampMonotonic": "193433679", "ConfigurationDirectoryMode": "0755", "Conflicts": "ipset.service ebtables.service shutdown.target iptables.service ip6tables.service", "ControlGroup": "/system.slice/firewalld.service", "ControlPID": "0", "De
faultDependencies": "yes", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "man:firewalld(1)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "3023", "ExecMainStartTimestamp": "Thu 2020-12-17 23:08:14 UTC", "ExecMainStartTimestampMonotonic": "193435493", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[Thu 2020-12-17 23:08:14 UTC] ; stop_time=[n/a] ; pid=3023 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/firewa
lld.service", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Thu 2020-12-17 23:08:14 UTC", "InactiveExitTimestampMonotonic": "193435581", "InvocationID": "a50b256547484bf3a612065ba32bf415", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "Li
mitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "1540534", "LimitNPROCSoft": "1540534", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "1540534", "LimitSIGPENDINGSoft": "1540534", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "3023", "MemoryAccounting": "yes", "MemoryCurrent": "40341504", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity
", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "firewalld.service dbus-org.fedoraproject.FirewallD1.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "dbus.socket system.slice sysinit.target", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectorySt
artOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Thu 2020-12-17 23:08:14 UTC", "StateChangeTimestampMonotonic": "193758194", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurren
t": "2", "TasksMax": "2464855", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogTimestamp": "Thu 2020-12-17 23:08:14 UTC", "WatchdogTimestampMonotonic": "193758192", "WatchdogUSec": "0"}}
ok: [host2.fqdn.tld] => {"changed": false, "name": "firewalld", "state": "started", "status": {"ActiveEnterTimestamp": "Thu 2020-12-17 23:08:43 UTC", "ActiveEnterTimestampMonotonic": "225997069", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "dbus.service basic.target sysinit.target polkit.service dbus.socket system.slice", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Thu 2020-12-17 23:08:42 UTC", "AssertTimestampMonotonic": "225681875", "Before": "multi-user.target network-pre.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "no", "CPUAffinity": "", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0
755", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Thu 2020-12-17 23:08:42 UTC", "ConditionTimestampMonotonic": "225681868", "ConfigurationDirectoryMode": "0755", "Conflicts": "iptables.service ip6tables.service ebtables.service ipset.service shutdown.target", "ControlGroup": "/system.slice/firewalld.service", "ControlPID": "0", "De
faultDependencies": "yes", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "man:firewalld(1)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "3020", "ExecMainStartTimestamp": "Thu 2020-12-17 23:08:42 UTC", "ExecMainStartTimestampMonotonic": "225683565", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[Thu 2020-12-17 23:08:42 UTC] ; stop_time=[n/a] ; pid=3020 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/firewa
lld.service", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Thu 2020-12-17 23:08:42 UTC", "InactiveExitTimestampMonotonic": "225683629", "InvocationID": "4d8ece8f705543c8ba0fa0a3135a1c2d", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "Li
mitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "1540534", "LimitNPROCSoft": "1540534", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "1540534", "LimitSIGPENDINGSoft": "1540534", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "3020", "MemoryAccounting": "yes", "MemoryCurrent": "43073536", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity
", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "firewalld.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "dbus.socket sysinit.target system.slice", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0
755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Thu 2020-12-17 23:08:43 UTC", "StateChangeTimestampMonotonic": "225997069", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "2464855", "TimeoutSt
artUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogTimestamp": "Thu 2020-12-17 23:08:43 UTC", "WatchdogTimestampMonotonic": "225997066", "WatchdogUSec": "0"}}
ok: [host3.fqdn.tld] => {"changed": false, "name": "firewalld", "state": "started", "status": {"ActiveEnterTimestamp": "Thu 2020-12-17 23:09:24 UTC", "ActiveEnterTimestampMonotonic": "262936660", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "basic.target polkit.service sysinit.target dbus.socket system.slice dbus.service", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Thu 2020-12-17 23:09:24 UTC", "AssertTimestampMonotonic": "262572795", "Before": "multi-user.target network-pre.target shutdown.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "BusName": "org.fedoraproject.FirewallD1", "CPUAccounting": "no", "CPUAffinity": "", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0
755", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Thu 2020-12-17 23:09:24 UTC", "ConditionTimestampMonotonic": "262572794", "ConfigurationDirectoryMode": "0755", "Conflicts": "iptables.service shutdown.target ip6tables.service ebtables.service ipset.service", "ControlGroup": "/system.slice/firewalld.service", "ControlPID": "0", "De
faultDependencies": "yes", "Delegate": "no", "Description": "firewalld - dynamic firewall daemon", "DevicePolicy": "auto", "Documentation": "man:firewalld(1)", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "EnvironmentFiles": "/etc/sysconfig/firewalld (ignore_errors=yes)", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "3027", "ExecMainStartTimestamp": "Thu 2020-12-17 23:09:24 UTC", "ExecMainStartTimestampMonotonic": "262574877", "ExecMainStatus": "0", "ExecReload": "{ path=/bin/kill ; argv[]=/bin/kill -HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "ExecStart": "{ path=/usr/sbin/firewalld ; argv[]=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS ; ignore_errors=no ; start_time=[Thu 2020-12-17 23:09:24 UTC] ; stop_time=[n/a] ; pid=3027 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/firewa
lld.service", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "firewalld.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Thu 2020-12-17 23:09:24 UTC", "InactiveExitTimestampMonotonic": "262575060", "InvocationID": "83893abe47f248fcbd783d797dc47196", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "mixed", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "Li
mitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "1540534", "LimitNPROCSoft": "1540534", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "1540534", "LimitSIGPENDINGSoft": "1540534", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "3027", "MemoryAccounting": "yes", "MemoryCurrent": "41521152", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity
", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "firewalld.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "RemoveIPC": "no", "Requires": "system.slice sysinit.target dbus.socket", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0
755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "null", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "null", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Thu 2020-12-17 23:09:24 UTC", "StateChangeTimestampMonotonic": "262936660", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "2", "TasksMax": "2464855", "TimeoutSt
artUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "dbus", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "Wants": "network-pre.target", "WatchdogTimestamp": "Thu 2020-12-17 23:09:24 UTC", "WatchdogTimestampMonotonic": "262936658", "WatchdogUSec": "0"}}
TASK [gluster.infra/roles/firewall_config : check if required variables are set] ***
task path: /etc/ansible/roles/gluster.infra/roles/firewall_config/tasks/main.yml:8
skipping: [host1.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
TASK [gluster.infra/roles/firewall_config : Open/Close firewalld ports] ********
task path: /etc/ansible/roles/gluster.infra/roles/firewall_config/tasks/main.yml:13
ok: [host3.fqdn.tld] => (item=2049/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "2049/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host2.fqdn.tld] => (item=2049/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "2049/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host1.fqdn.tld] => (item=2049/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "2049/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host2.fqdn.tld] => (item=54321/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "54321/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host1.fqdn.tld] => (item=54321/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "54321/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host3.fqdn.tld] => (item=54321/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "54321/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host2.fqdn.tld] => (item=5900/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5900/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host1.fqdn.tld] => (item=5900/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5900/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host3.fqdn.tld] => (item=5900/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5900/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host2.fqdn.tld] => (item=5900-6923/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5900-6923/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host1.fqdn.tld] => (item=5900-6923/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5900-6923/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host3.fqdn.tld] => (item=5900-6923/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5900-6923/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host2.fqdn.tld] => (item=5666/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5666/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host3.fqdn.tld] => (item=5666/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5666/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host1.fqdn.tld] => (item=5666/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "5666/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host2.fqdn.tld] => (item=16514/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "16514/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host3.fqdn.tld] => (item=16514/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "16514/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host1.fqdn.tld] => (item=16514/tcp) => {"ansible_loop_var": "item", "changed": false, "item": "16514/tcp", "msg": "Permanent and Non-Permanent(immediate) operation"}
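For reference, that task is just opening the listed TCP ports (2049, 54321, 5900, 5900-6923, 5666, 16514) both at runtime and permanently on every host. A minimal manual equivalent, assuming firewalld is the active firewall on the hosts, would be:

  # ports taken from the task output above; run on each host
  for p in 2049/tcp 54321/tcp 5900/tcp 5900-6923/tcp 5666/tcp 16514/tcp; do
      firewall-cmd --add-port="$p"              # runtime ("immediate") rule
      firewall-cmd --permanent --add-port="$p"  # persisted rule
  done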
TASK [gluster.infra/roles/firewall_config : Add/Delete services to firewalld rules] ***
task path: /etc/ansible/roles/gluster.infra/roles/firewall_config/tasks/main.yml:24
ok: [host2.fqdn.tld] => (item=glusterfs) => {"ansible_loop_var": "item", "changed": false, "item": "glusterfs", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host1.fqdn.tld] => (item=glusterfs) => {"ansible_loop_var": "item", "changed": false, "item": "glusterfs", "msg": "Permanent and Non-Permanent(immediate) operation"}
ok: [host3.fqdn.tld] => (item=glusterfs) => {"ansible_loop_var": "item", "changed": false, "item": "glusterfs", "msg": "Permanent and Non-Permanent(immediate) operation"}
TASK [gluster.infra/roles/backend_setup : Check if vdsm-python package is installed or not] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:11
changed: [host1.fqdn.tld] => {"changed": true, "cmd": ["rpm", "-q", "vdsm-python"], "delta": "0:00:00.014663", "end": "2020-12-17 23:25:31.644227", "rc": 0, "start": "2020-12-17 23:25:31.629564", "stderr": "", "stderr_lines": [], "stdout": "vdsm-python-4.40.35.1-1.el8.noarch", "stdout_lines": ["vdsm-python-4.40.35.1-1.el8.noarch"]}
changed: [host2.fqdn.tld] => {"changed": true, "cmd": ["rpm", "-q", "vdsm-python"], "delta": "0:00:00.014344", "end": "2020-12-17 23:25:31.682475", "rc": 0, "start": "2020-12-17 23:25:31.668131", "stderr": "", "stderr_lines": [], "stdout": "vdsm-python-4.40.35.1-1.el8.noarch", "stdout_lines": ["vdsm-python-4.40.35.1-1.el8.noarch"]}
changed: [host3.fqdn.tld] => {"changed": true, "cmd": ["rpm", "-q", "vdsm-python"], "delta": "0:00:00.016546", "end": "2020-12-17 23:25:31.727142", "rc": 0, "start": "2020-12-17 23:25:31.710596", "stderr": "", "stderr_lines": [], "stdout": "vdsm-python-4.40.35.1-1.el8.noarch", "stdout_lines": ["vdsm-python-4.40.35.1-1.el8.noarch"]}
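Note that this step only queries the package database; "rpm -q vdsm-python" exits 0 when the package is present, and the "changed: true" status is just how Ansible's command module reports any shell invocation, not an actual change. The same check by hand:

  rpm -q vdsm-python   # prints vdsm-python-4.40.35.1-1.el8.noarch on all three hosts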
TASK [gluster.infra/roles/backend_setup : Remove the existing LVM filter] ******
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/lvm_exclude_filter.yml:2
skipping: [host1.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
TASK [gluster.infra/roles/backend_setup : Check that the multipath.conf exists] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:3
ok: [host1.fqdn.tld] => {"changed": false, "stat": {"atime": 1608245965.7761014, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 16, "charset": "us-ascii", "checksum": "da2254ee7938e2ca05dc3eb865fcc3ce061dbf69", "ctime": 1608245958.6684368, "dev": 64777, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 33554645, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1608151373.42196, "nlink": 1, "path": "/etc/multipath.conf", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 6556, "uid": 0, "version": "1158382074", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [host2.fqdn.tld] => {"changed": false, "stat": {"atime": 1608245895.3097472, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 16, "charset": "us-ascii", "checksum": "da2254ee7938e2ca05dc3eb865fcc3ce061dbf69", "ctime": 1608245888.427173, "dev": 64777, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 33554645, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1608161348.6320744, "nlink": 1, "path": "/etc/multipath.conf", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 6556, "uid": 0, "version": "2851889706", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [host3.fqdn.tld] => {"changed": false, "stat": {"atime": 1608245957.5354722, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 16, "charset": "us-ascii", "checksum": "da2254ee7938e2ca05dc3eb865fcc3ce061dbf69", "ctime": 1608245950.8239136, "dev": 64777, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 33554645, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1608161348.8334486, "nlink": 1, "path": "/etc/multipath.conf", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 6556, "uid": 0, "version": "2140242966", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is enabled if not] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:8
skipping: [host1.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
TASK [gluster.infra/roles/backend_setup : Ensure that multipathd services is running] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:12
ok: [host2.fqdn.tld] => {"changed": false, "enabled": true, "name": "multipathd", "state": "started", "status": {"ActiveEnterTimestamp": "Thu 2020-12-17 23:08:41 UTC", "ActiveEnterTimestampMonotonic": "224522056", "ActiveExitTimestamp": "Thu 2020-12-17 23:08:35 UTC", "ActiveExitTimestampMonotonic": "218954209", "ActiveState": "active", "After": "system.slice systemd-journald.socket multipathd.socket systemd-udev-settle.service systemd-udev-trigger.service", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Thu 2020-12-17 23:08:41 UTC", "AssertTimestampMonotonic": "224352116", "Before": "iscsi.service iscsid.service local-fs-pre.target lvm2-activation-early.service blk-availability.service", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "no", "CPUAffinity": "", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetO
nFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Thu 2020-12-17 23:08:41 UTC", "ConditionTimestampMonotonic": "224351991", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroup": "/sy
stem.slice/multipathd.service", "ControlPID": "0", "DefaultDependencies": "no", "Delegate": "no", "Description": "Device-Mapper Multipath Device Controller", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "2611", "ExecMainStartTimestamp": "Thu 2020-12-17 23:08:41 UTC", "ExecMainStartTimestampMonotonic": "224366679", "ExecMainStatus": "0", "ExecReload": "{ path=/sbin/multipathd ; argv[]=/sbin/multipathd reconfigure ; ignore_errors=no ; start_time=[Thu 2020-12-17 23:18:18 UTC] ; stop_time=[Thu 2020-12-17 23:18:18 UTC] ; pid=99612 ; code=exited ; status=0 }", "ExecStart": "{ path=/sbin/multipathd ; argv[]=/sbin/multipathd -d -s ; ignore_errors=no ; start_time=[Thu 2020-12-17 23:08:41 UTC] ; stop_time=[n/a] ; pid=2611 ; code=(null) ; status=0/0 }", "ExecStartPre": "{ path=/sbin/multipath ; argv[]=/sbin/multipath -A ; ignore_errors=yes ; start_time=[Thu 2020-12-17 23:
08:41 UTC] ; stop_time=[Thu 2020-12-17 23:08:41 UTC] ; pid=2607 ; code=exited ; status=0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/multipathd.service", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "multipathd.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Thu 2020-12-17 23:08:35 UTC", "InactiveEnterTimestampMonotonic": "218969672", "InactiveExitTimestamp": "Thu 2020-12-17 23:08:41 UTC", "InactiveExitTimestampMonotonic": "224353642", "InvocationID": "ec1bf4d1186e469c8457195c067736ec", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "p
rivate", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "1540534", "LimitNPROCSoft": "1540534", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "1540534", "LimitSIGPENDINGSoft": "1540534", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurs
t": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "2611", "MemoryAccounting": "yes", "MemoryCurrent": "13635584", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "multipathd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "R
emoveIPC": "no", "Requires": "system.slice", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Thu 2020-12-17 23:18:18 UTC", "StateChangeTimestampMonotonic": "801939841", "StateDirectoryMode": "0755", "StatusErrno": "0", "StatusText": "up", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none"
, "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "7", "TasksMax": "infinity", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "TriggeredBy": "multipathd.socket", "Type": "notify", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "sysinit.target", "Wants": "systemd-udev-settle.service systemd-udev-trigger.service", "WatchdogTimestamp": "Thu 2020-12-17 23:08:41 UTC", "WatchdogTimestampMonotonic": "224522054", "WatchdogUSec": "0"}}
ok: [host1.fqdn.tld] => {"changed": false, "enabled": true, "name": "multipathd", "state": "started", "status": {"ActiveEnterTimestamp": "Thu 2020-12-17 23:08:13 UTC", "ActiveEnterTimestampMonotonic": "192031125", "ActiveExitTimestamp": "Thu 2020-12-17 23:08:07 UTC", "ActiveExitTimestampMonotonic": "186554488", "ActiveState": "active", "After": "multipathd.socket systemd-udev-settle.service system.slice systemd-udev-trigger.service systemd-journald.socket", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Thu 2020-12-17 23:08:12 UTC", "AssertTimestampMonotonic": "191863959", "Before": "blk-availability.service iscsid.service lvm2-activation-early.service iscsi.service local-fs-pre.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "no", "CPUAffinity": "", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetO
nFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Thu 2020-12-17 23:08:12 UTC", "ConditionTimestampMonotonic": "191863909", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroup": "/sy
stem.slice/multipathd.service", "ControlPID": "0", "DefaultDependencies": "no", "Delegate": "no", "Description": "Device-Mapper Multipath Device Controller", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "2621", "ExecMainStartTimestamp": "Thu 2020-12-17 23:08:12 UTC", "ExecMainStartTimestampMonotonic": "191876665", "ExecMainStatus": "0", "ExecReload": "{ path=/sbin/multipathd ; argv[]=/sbin/multipathd reconfigure ; ignore_errors=no ; start_time=[Thu 2020-12-17 23:18:18 UTC] ; stop_time=[Thu 2020-12-17 23:18:18 UTC] ; pid=110075 ; code=exited ; status=0 }", "ExecStart": "{ path=/sbin/multipathd ; argv[]=/sbin/multipathd -d -s ; ignore_errors=no ; start_time=[Thu 2020-12-17 23:08:12 UTC] ; stop_time=[n/a] ; pid=2621 ; code=(null) ; status=0/0 }", "ExecStartPre": "{ path=/sbin/multipath ; argv[]=/sbin/multipath -A ; ignore_errors=yes ; start_time=[Thu 2020-12-17 23
:08:12 UTC] ; stop_time=[Thu 2020-12-17 23:08:12 UTC] ; pid=2614 ; code=exited ; status=0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/multipathd.service", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "multipathd.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Thu 2020-12-17 23:08:07 UTC", "InactiveEnterTimestampMonotonic": "186569608", "InactiveExitTimestamp": "Thu 2020-12-17 23:08:12 UTC", "InactiveExitTimestampMonotonic": "191865759", "InvocationID": "c0b3328b6fbf4914a850aad27e5b6124", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "
private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "1540534", "LimitNPROCSoft": "1540534", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "1540534", "LimitSIGPENDINGSoft": "1540534", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBur
st": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "2621", "MemoryAccounting": "yes", "MemoryCurrent": "13611008", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "multipathd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "
RemoveIPC": "no", "Requires": "system.slice", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Thu 2020-12-17 23:18:18 UTC", "StateChangeTimestampMonotonic": "797914715", "StateDirectoryMode": "0755", "StatusErrno": "0", "StatusText": "up", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none
", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "7", "TasksMax": "infinity", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "TriggeredBy": "multipathd.socket", "Type": "notify", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "sysinit.target", "Wants": "systemd-udev-settle.service systemd-udev-trigger.service", "WatchdogTimestamp": "Thu 2020-12-17 23:08:13 UTC", "WatchdogTimestampMonotonic": "192031123", "WatchdogUSec": "0"}}
ok: [host3.fqdn.tld] => {"changed": false, "enabled": true, "name": "multipathd", "state": "started", "status": {"ActiveEnterTimestamp": "Thu 2020-12-17 23:09:23 UTC", "ActiveEnterTimestampMonotonic": "261349382", "ActiveExitTimestamp": "Thu 2020-12-17 23:09:17 UTC", "ActiveExitTimestampMonotonic": "255817465", "ActiveState": "active", "After": "multipathd.socket systemd-journald.socket systemd-udev-settle.service systemd-udev-trigger.service system.slice", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Thu 2020-12-17 23:09:23 UTC", "AssertTimestampMonotonic": "261169612", "Before": "local-fs-pre.target iscsi.service lvm2-activation-early.service iscsid.service blk-availability.service", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "no", "CPUAffinity": "", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetO
nFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanIsolate": "no", "CanReload": "yes", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Thu 2020-12-17 23:09:23 UTC", "ConditionTimestampMonotonic": "261169576", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroup": "/sy
stem.slice/multipathd.service", "ControlPID": "0", "DefaultDependencies": "no", "Delegate": "no", "Description": "Device-Mapper Multipath Device Controller", "DevicePolicy": "auto", "DynamicUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "ExecMainCode": "0", "ExecMainExitTimestampMonotonic": "0", "ExecMainPID": "2620", "ExecMainStartTimestamp": "Thu 2020-12-17 23:09:23 UTC", "ExecMainStartTimestampMonotonic": "261180356", "ExecMainStatus": "0", "ExecReload": "{ path=/sbin/multipathd ; argv[]=/sbin/multipathd reconfigure ; ignore_errors=no ; start_time=[Thu 2020-12-17 23:18:18 UTC] ; stop_time=[Thu 2020-12-17 23:18:19 UTC] ; pid=99588 ; code=exited ; status=0 }", "ExecStart": "{ path=/sbin/multipathd ; argv[]=/sbin/multipathd -d -s ; ignore_errors=no ; start_time=[Thu 2020-12-17 23:09:23 UTC] ; stop_time=[n/a] ; pid=2620 ; code=(null) ; status=0/0 }", "ExecStartPre": "{ path=/sbin/multipath ; argv[]=/sbin/multipath -A ; ignore_errors=yes ; start_time=[Thu 2020-12-17 23:
09:23 UTC] ; stop_time=[Thu 2020-12-17 23:09:23 UTC] ; pid=2617 ; code=exited ; status=0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/multipathd.service", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0", "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "multipathd.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestamp": "Thu 2020-12-17 23:09:17 UTC", "InactiveEnterTimestampMonotonic": "255837702", "InactiveExitTimestamp": "Thu 2020-12-17 23:09:23 UTC", "InactiveExitTimestampMonotonic": "261170667", "InvocationID": "a5ebbbdb67cf48bda5d1696432290776", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "p
rivate", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "65536", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "1540534", "LimitNPROCSoft": "1540534", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "1540534", "LimitSIGPENDINGSoft": "1540534", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurs
t": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "2620", "MemoryAccounting": "yes", "MemoryCurrent": "13955072", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAPolicy": "n/a", "Names": "multipathd.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "main", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "no", "R
emoveIPC": "no", "Requires": "system.slice", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes", "Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Thu 2020-12-17 23:18:19 UTC", "StateChangeTimestampMonotonic": "796975135", "StateDirectoryMode": "0755", "StatusErrno": "0", "StatusText": "up", "StopWhenUnneeded": "no", "SubState": "running", "SuccessAction": "none"
, "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "7", "TasksMax": "infinity", "TimeoutStartUSec": "1min 30s", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "TriggeredBy": "multipathd.socket", "Type": "notify", "UID": "[not set]", "UMask": "0022", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "sysinit.target", "Wants": "systemd-udev-trigger.service systemd-udev-settle.service", "WatchdogTimestamp": "Thu 2020-12-17 23:09:23 UTC", "WatchdogTimestampMonotonic": "261349380", "WatchdogUSec": "0"}}
TASK [gluster.infra/roles/backend_setup : Create /etc/multipath/conf.d if doesn't exists] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:18
ok: [host2.fqdn.tld] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/multipath/conf.d", "secontext": "system_u:object_r:lvm_metadata_t:s0", "size": 55, "state": "directory", "uid": 0}
ok: [host3.fqdn.tld] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/multipath/conf.d", "secontext": "system_u:object_r:lvm_metadata_t:s0", "size": 55, "state": "directory", "uid": 0}
ok: [host1.fqdn.tld] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/etc/multipath/conf.d", "secontext": "system_u:object_r:lvm_metadata_t:s0", "size": 55, "state": "directory", "uid": 0}
TASK [gluster.infra/roles/backend_setup : Get the UUID of the devices] *********
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:24
failed: [host2.fqdn.tld] (item=nvme0n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme0n", "delta": "0:00:00.009726", "end": "2020-12-17 23:25:52.641952", "item": "nvme0n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:25:52.632226", "stderr": "Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument", "stderr_lines": ["Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument"], "stdout": "", "stdout_lines": []}
failed: [host1.fqdn.tld] (item=nvme0n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme0n", "delta": "0:00:00.009507", "end": "2020-12-17 23:25:52.679519", "item": "nvme0n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:25:52.670012", "stderr": "Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument", "stderr_lines": ["Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument"], "stdout": "", "stdout_lines": []}
failed: [host3.fqdn.tld] (item=nvme0n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme0n", "delta": "0:00:00.009487", "end": "2020-12-17 23:25:52.711264", "item": "nvme0n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:25:52.701777", "stderr": "Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument", "stderr_lines": ["Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument"], "stdout": "", "stdout_lines": []}
failed: [host2.fqdn.tld] (item=nvme2n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme2n", "delta": "0:00:00.009607", "end": "2020-12-17 23:25:57.610957", "item": "nvme2n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:25:57.601350", "stderr": "Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument", "stderr_lines": ["Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument"], "stdout": "", "stdout_lines": []}
failed: [host1.fqdn.tld] (item=nvme2n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme2n", "delta": "0:00:00.009679", "end": "2020-12-17 23:25:57.690543", "item": "nvme2n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:25:57.680864", "stderr": "Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument", "stderr_lines": ["Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument"], "stdout": "", "stdout_lines": []}
failed: [host3.fqdn.tld] (item=nvme2n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme2n", "delta": "0:00:00.009181", "end": "2020-12-17 23:25:57.689728", "item": "nvme2n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:25:57.680547", "stderr": "Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument", "stderr_lines": ["Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument"], "stdout": "", "stdout_lines": []}
failed: [host2.fqdn.tld] (item=nvme1n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme1n", "delta": "0:00:00.009420", "end": "2020-12-17 23:26:02.560780", "item": "nvme1n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:26:02.551360", "stderr": "Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument", "stderr_lines": ["Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument"], "stdout": "", "stdout_lines": []}
...ignoring
failed: [host3.fqdn.tld] (item=nvme1n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme1n", "delta": "0:00:00.009051", "end": "2020-12-17 23:26:02.687562", "item": "nvme1n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:26:02.678511", "stderr": "Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument", "stderr_lines": ["Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument"], "stdout": "", "stdout_lines": []}
...ignoring
failed: [host1.fqdn.tld] (item=nvme1n1) => {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme1n", "delta": "0:00:00.009258", "end": "2020-12-17 23:26:02.702197", "item": "nvme1n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:26:02.692939", "stderr": "Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument", "stderr_lines": ["Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument"], "stdout": "", "stdout_lines": []}
...ignoring
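This is the first real failure, even though the role ignores it: the item is nvme0n1 but the command actually run is "multipath -a /dev/nvme0n", i.e. the last character of the device name is dropped (presumably a substitution meant for sdX-style names), so multipath rejects the argument and no WWIDs are collected. That is also why the "Add wwid to blacklist in blacklist.conf file" task further down skips every item. A possible manual workaround, assuming the intent is to keep the NVMe devices out of multipath and that the sysfs wwid is what multipathd uses for them (this is my sketch, not something the playbook does):

  # read each NVMe WWID from sysfs and blacklist it explicitly, then reload multipathd
  {
      echo 'blacklist {'
      for dev in nvme0n1 nvme1n1 nvme2n1; do
          echo "    wwid \"$(cat /sys/block/$dev/wwid)\""
      done
      echo '}'
  } >> /etc/multipath/conf.d/blacklist.conf
  systemctl reload multipathd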
TASK [gluster.infra/roles/backend_setup : Check that the blacklist.conf exists] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:30
ok: [host2.fqdn.tld] => {"changed": false, "stat": {"atime": 1608245981.4759362, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "2c1ec58c96d37eeb81e0378bd4ce8e2bec52e47b", "ctime": 1608245888.801204, "dev": 64777, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 33618955, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1608245445.6802266, "nlink": 1, "path": "/etc/multipath/conf.d/blacklist.conf", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 72, "uid": 0, "version": "4189839679", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [host1.fqdn.tld] => {"changed": false, "stat": {"atime": 1608246023.2934792, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "2c1ec58c96d37eeb81e0378bd4ce8e2bec52e47b", "ctime": 1608245959.0814755, "dev": 64777, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 33628171, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1608245445.7213576, "nlink": 1, "path": "/etc/multipath/conf.d/blacklist.conf", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 72, "uid": 0, "version": "3487988862", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
ok: [host3.fqdn.tld] => {"changed": false, "stat": {"atime": 1608246044.783734, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "2c1ec58c96d37eeb81e0378bd4ce8e2bec52e47b", "ctime": 1608245951.1839435, "dev": 64777, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 33633867, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1608245445.836882, "nlink": 1, "path": "/etc/multipath/conf.d/blacklist.conf", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 72, "uid": 0, "version": "2464111218", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
TASK [gluster.infra/roles/backend_setup : Create blacklist template content] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:35
skipping: [host1.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
TASK [gluster.infra/roles/backend_setup : Add wwid to blacklist in blacklist.conf file] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:45
skipping: [host1.fqdn.tld] => (item={'msg': 'non-zero return code', 'cmd': 'multipath -a /dev/nvme0n', 'stdout': '', 'stderr': "Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument", 'rc': 1, 'start': '2020-12-17 23:25:52.670012', 'end': '2020-12-17 23:25:52.679519', 'delta': '0:00:00.009507', 'changed': True, 'failed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme0n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument"], 'item': 'nvme0n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme0n", "delta": "0:00:00.009507", "end": "2020-12-17 23:25:52.679519", "failed": true, "invocation": {"module_args": {"_raw_par
ams": "multipath -a /dev/nvme0n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme0n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:25:52.670012", "stderr": "Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument", "stderr_lines": ["Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument"], "stdout": "", "stdout_lines": []}, "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item={'msg': 'non-zero return code', 'cmd': 'multipath -a /dev/nvme2n', 'stdout': '', 'stderr': "Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument", 'rc': 1, 'start': '2020-12-17 23:25:57.680864', 'end': '2020-12-17 23:25:57.690543', 'delta': '0:00:00.009679', 'changed': True, 'failed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme2n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument"], 'item': 'nvme2n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme2n", "delta": "0:00:00.009679", "end": "2020-12-17 23:25:57.690543", "failed": true, "invocation": {"module_args": {"_raw_par
ams": "multipath -a /dev/nvme2n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme2n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:25:57.680864", "stderr": "Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument", "stderr_lines": ["Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument"], "stdout": "", "stdout_lines": []}, "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item={'msg': 'non-zero return code', 'cmd': 'multipath -a /dev/nvme1n', 'stdout': '', 'stderr': "Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument", 'rc': 1, 'start': '2020-12-17 23:26:02.692939', 'end': '2020-12-17 23:26:02.702197', 'delta': '0:00:00.009258', 'changed': True, 'failed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme1n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument"], 'item': 'nvme1n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme1n", "delta": "0:00:00.009258", "end": "2020-12-17 23:26:02.702197", "failed": true, "invocation": {"module_args": {"_raw_par
ams": "multipath -a /dev/nvme1n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme1n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:26:02.692939", "stderr": "Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument", "stderr_lines": ["Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument"], "stdout": "", "stdout_lines": []}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item={'msg': 'non-zero return code', 'cmd': 'multipath -a /dev/nvme0n', 'stdout': '', 'stderr': "Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument", 'rc': 1, 'start': '2020-12-17 23:25:52.632226', 'end': '2020-12-17 23:25:52.641952', 'delta': '0:00:00.009726', 'changed': True, 'failed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme0n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument"], 'item': 'nvme0n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme0n", "delta": "0:00:00.009726", "end": "2020-12-17 23:25:52.641952", "failed": true, "invocation": {"module_args": {"_raw_par
ams": "multipath -a /dev/nvme0n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme0n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:25:52.632226", "stderr": "Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument", "stderr_lines": ["Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument"], "stdout": "", "stdout_lines": []}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item={'msg': 'non-zero return code', 'cmd': 'multipath -a /dev/nvme2n', 'stdout': '', 'stderr': "Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument", 'rc': 1, 'start': '2020-12-17 23:25:57.601350', 'end': '2020-12-17 23:25:57.610957', 'delta': '0:00:00.009607', 'changed': True, 'failed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme2n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument"], 'item': 'nvme2n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme2n", "delta": "0:00:00.009607", "end": "2020-12-17 23:25:57.610957", "failed": true, "invocation": {"module_args": {"_raw_par
ams": "multipath -a /dev/nvme2n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme2n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:25:57.601350", "stderr": "Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument", "stderr_lines": ["Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument"], "stdout": "", "stdout_lines": []}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item={'msg': 'non-zero return code', 'cmd': 'multipath -a /dev/nvme1n', 'stdout': '', 'stderr': "Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument", 'rc': 1, 'start': '2020-12-17 23:26:02.551360', 'end': '2020-12-17 23:26:02.560780', 'delta': '0:00:00.009420', 'changed': True, 'failed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme1n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument"], 'item': 'nvme1n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme1n", "delta": "0:00:00.009420", "end": "2020-12-17 23:26:02.560780", "failed": true, "invocation": {"module_args": {"_raw_par
ams": "multipath -a /dev/nvme1n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme1n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:26:02.551360", "stderr": "Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument", "stderr_lines": ["Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument"], "stdout": "", "stdout_lines": []}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item={'msg': 'non-zero return code', 'cmd': 'multipath -a /dev/nvme0n', 'stdout': '', 'stderr': "Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument", 'rc': 1, 'start': '2020-12-17 23:25:52.701777', 'end': '2020-12-17 23:25:52.711264', 'delta': '0:00:00.009487', 'changed': True, 'failed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme0n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument"], 'item': 'nvme0n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme0n", "delta": "0:00:00.009487", "end": "2020-12-17 23:25:52.711264", "failed": true, "invocation": {"module_args": {"_raw_par
ams": "multipath -a /dev/nvme0n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme0n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:25:52.701777", "stderr": "Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument", "stderr_lines": ["Dec 17 23:25:52 | '/dev/nvme0n' is not a valid argument"], "stdout": "", "stdout_lines": []}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item={'msg': 'non-zero return code', 'cmd': 'multipath -a /dev/nvme2n', 'stdout': '', 'stderr': "Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument", 'rc': 1, 'start': '2020-12-17 23:25:57.680547', 'end': '2020-12-17 23:25:57.689728', 'delta': '0:00:00.009181', 'changed': True, 'failed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme2n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument"], 'item': 'nvme2n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme2n", "delta": "0:00:00.009181", "end": "2020-12-17 23:25:57.689728", "failed": true, "invocation": {"module_args": {"_raw_par
ams": "multipath -a /dev/nvme2n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme2n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:25:57.680547", "stderr": "Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument", "stderr_lines": ["Dec 17 23:25:57 | '/dev/nvme2n' is not a valid argument"], "stdout": "", "stdout_lines": []}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item={'msg': 'non-zero return code', 'cmd': 'multipath -a /dev/nvme1n', 'stdout': '', 'stderr': "Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument", 'rc': 1, 'start': '2020-12-17 23:26:02.678511', 'end': '2020-12-17 23:26:02.687562', 'delta': '0:00:00.009051', 'changed': True, 'failed': True, 'invocation': {'module_args': {'_raw_params': 'multipath -a /dev/nvme1n', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument"], 'item': 'nvme1n1', 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "multipath -a /dev/nvme1n", "delta": "0:00:00.009051", "end": "2020-12-17 23:26:02.687562", "failed": true, "invocation": {"module_args": {"_raw_par
ams": "multipath -a /dev/nvme1n", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": "nvme1n1", "msg": "non-zero return code", "rc": 1, "start": "2020-12-17 23:26:02.678511", "stderr": "Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument", "stderr_lines": ["Dec 17 23:26:02 | '/dev/nvme1n' is not a valid argument"], "stdout": "", "stdout_lines": []}, "skip_reason": "Conditional result was False"}
TASK [gluster.infra/roles/backend_setup : Reload multipathd] *******************
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/blacklist_mpath_devices.yml:55
changed: [host2.fqdn.tld] => {"changed": true, "cmd": "systemctl reload multipathd", "delta": "0:00:00.028351", "end": "2020-12-17 23:26:13.119016", "rc": 0, "start": "2020-12-17 23:26:13.090665", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [host1.fqdn.tld] => {"changed": true, "cmd": "systemctl reload multipathd", "delta": "0:00:00.030193", "end": "2020-12-17 23:26:13.161971", "rc": 0, "start": "2020-12-17 23:26:13.131778", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [host3.fqdn.tld] => {"changed": true, "cmd": "systemctl reload multipathd", "delta": "0:00:00.029479", "end": "2020-12-17 23:26:13.225034", "rc": 0, "start": "2020-12-17 23:26:13.195555", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
TASK [gluster.infra/roles/backend_setup : Gather facts to determine the OS distribution] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:29
ok: [host2.fqdn.tld]
ok: [host1.fqdn.tld]
ok: [host3.fqdn.tld]
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for debian systems.] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:37
skipping: [host1.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:43
ok: [host2.fqdn.tld] => {"changed": false, "msg": "Nothing to do", "rc": 0, "results": []}
ok: [host3.fqdn.tld] => {"changed": false, "msg": "Nothing to do", "rc": 0, "results": []}
ok: [host1.fqdn.tld] => {"changed": false, "msg": "Nothing to do", "rc": 0, "results": []}
TASK [gluster.infra/roles/backend_setup : Install python-yaml package for Debian systems] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:49
skipping: [host1.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
TASK [gluster.infra/roles/backend_setup : Initialize vdo_devs array] ***********
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:57
ok: [host1.fqdn.tld] => {"ansible_facts": {"vdo_devs": []}, "changed": false}
ok: [host2.fqdn.tld] => {"ansible_facts": {"vdo_devs": []}, "changed": false}
ok: [host3.fqdn.tld] => {"ansible_facts": {"vdo_devs": []}, "changed": false}
TASK [gluster.infra/roles/backend_setup : Record VDO devices (if any)] *********
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:64
ok: [host1.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_facts": {"vdo_devs": ["gluster_vg_nvme0n1"]}, "ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}}
ok: [host1.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_facts": {"vdo_devs": ["gluster_vg_nvme0n1", "gluster_vg_nvme2n1"]}, "ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}}
ok: [host1.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_facts": {"vdo_devs": ["gluster_vg_nvme0n1", "gluster_vg_nvme2n1", "gluster_vg_nvme1n1"]}, "ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}}
ok: [host2.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_facts": {"vdo_devs": ["gluster_vg_nvme0n1"]}, "ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}}
ok: [host2.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_facts": {"vdo_devs": ["gluster_vg_nvme0n1", "gluster_vg_nvme2n1"]}, "ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}}
ok: [host2.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_facts": {"vdo_devs": ["gluster_vg_nvme0n1", "gluster_vg_nvme2n1", "gluster_vg_nvme1n1"]}, "ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}}
ok: [host3.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme0n1', 'pvname': '/dev/mapper/vdo_nvme0n1'}) => {"ansible_facts": {"vdo_devs": ["gluster_vg_nvme0n1"]}, "ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme0n1", "vgname": "gluster_vg_nvme0n1"}}
ok: [host3.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme2n1', 'pvname': '/dev/mapper/vdo_nvme2n1'}) => {"ansible_facts": {"vdo_devs": ["gluster_vg_nvme0n1", "gluster_vg_nvme2n1"]}, "ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme2n1", "vgname": "gluster_vg_nvme2n1"}}
ok: [host3.fqdn.tld] => (item={'vgname': 'gluster_vg_nvme1n1', 'pvname': '/dev/mapper/vdo_nvme1n1'}) => {"ansible_facts": {"vdo_devs": ["gluster_vg_nvme0n1", "gluster_vg_nvme2n1", "gluster_vg_nvme1n1"]}, "ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/mapper/vdo_nvme1n1", "vgname": "gluster_vg_nvme1n1"}}
TASK [gluster.infra/roles/backend_setup : Configure lvm thinpool extend threshold] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/lvm_config.yml:5
skipping: [host1.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
TASK [gluster.infra/roles/backend_setup : Configure lvm thinpool extend percentage] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/lvm_config.yml:13
skipping: [host1.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => {"changed": false, "skip_reason": "Conditional result was False"}
TASK [gluster.infra/roles/backend_setup : Check if vdo block device exists] ****
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:4
changed: [host2.fqdn.tld] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme0n1 && echo \"1\" || echo \"0\"", "delta": "0:00:00.004118", "end": "2020-12-17 23:26:30.064152", "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:30.060034", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}
changed: [host1.fqdn.tld] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme0n1 && echo \"1\" || echo \"0\"", "delta": "0:00:00.004220", "end": "2020-12-17 23:26:30.083000", "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:30.078780", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}
changed: [host3.fqdn.tld] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme0n1 && echo \"1\" || echo \"0\"", "delta": "0:00:00.004168", "end": "2020-12-17 23:26:30.155008", "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:30.150840", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}
changed: [host2.fqdn.tld] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme2n1 && echo \"1\" || echo \"0\"", "delta": "0:00:00.004140", "end": "2020-12-17 23:26:35.031938", "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:35.027798", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}
changed: [host1.fqdn.tld] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme2n1 && echo \"1\" || echo \"0\"", "delta": "0:00:00.004216", "end": "2020-12-17 23:26:35.085358", "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:35.081142", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}
changed: [host3.fqdn.tld] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme2n1 && echo \"1\" || echo \"0\"", "delta": "0:00:00.004114", "end": "2020-12-17 23:26:35.173650", "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:35.169536", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}
changed: [host2.fqdn.tld] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme1n1 && echo \"1\" || echo \"0\"", "delta": "0:00:00.004195", "end": "2020-12-17 23:26:39.992465", "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:39.988270", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}
changed: [host1.fqdn.tld] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme1n1 && echo \"1\" || echo \"0\"", "delta": "0:00:00.004222", "end": "2020-12-17 23:26:40.129536", "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:40.125314", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}
changed: [host3.fqdn.tld] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme1n1 && echo \"1\" || echo \"0\"", "delta": "0:00:00.004205", "end": "2020-12-17 23:26:40.164598", "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:40.160393", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}
TASK [gluster.infra/roles/backend_setup : Record for missing devices for phase 2] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:11
skipping: [host1.fqdn.tld] => (item={'cmd': 'test -b /dev/nvme0n1 && echo "1" || echo "0"', 'stdout': '1', 'stderr': '', 'rc': 0, 'start': '2020-12-17 23:26:30.078780', 'end': '2020-12-17 23:26:30.083000', 'delta': '0:00:00.004220', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'test -b /dev/nvme0n1 && echo "1" || echo "0"', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1'], 'stderr_lines': [], 'failed': False, 'item': {'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}, 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme0n1 && echo \"1\" || echo \"0\"", "delta": "0
:00:00.004220", "end": "2020-12-17 23:26:30.083000", "failed": false, "invocation": {"module_args": {"_raw_params": "test -b /dev/nvme0n1 && echo \"1\" || echo \"0\"", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:30.078780", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}, "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item={'cmd': 'test -b /dev/nvme2n1 && echo "1" || echo "0"', 'stdout': '1', 'stderr': '', 'rc': 0, 'start': '2020-12-17 23:26:35.081142', 'end': '2020-12-17 23:26:35.085358', 'delta': '0:00:00.004216', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'test -b /dev/nvme2n1 && echo "1" || echo "0"', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1'], 'stderr_lines': [], 'failed': False, 'item': {'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}, 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme2n1 && echo \"1\" || echo \"0\"", "delta": "
0:00:00.004216", "end": "2020-12-17 23:26:35.085358", "failed": false, "invocation": {"module_args": {"_raw_params": "test -b /dev/nvme2n1 && echo \"1\" || echo \"0\"", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:35.081142", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}, "skip_reason": "Conditional result was False"}
skipping: [host1.fqdn.tld] => (item={'cmd': 'test -b /dev/nvme1n1 && echo "1" || echo "0"', 'stdout': '1', 'stderr': '', 'rc': 0, 'start': '2020-12-17 23:26:40.125314', 'end': '2020-12-17 23:26:40.129536', 'delta': '0:00:00.004222', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'test -b /dev/nvme1n1 && echo "1" || echo "0"', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1'], 'stderr_lines': [], 'failed': False, 'item': {'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}, 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme1n1 && echo \"1\" || echo \"0\"", "delta": "
0:00:00.004222", "end": "2020-12-17 23:26:40.129536", "failed": false, "invocation": {"module_args": {"_raw_params": "test -b /dev/nvme1n1 && echo \"1\" || echo \"0\"", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:40.125314", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item={'cmd': 'test -b /dev/nvme0n1 && echo "1" || echo "0"', 'stdout': '1', 'stderr': '', 'rc': 0, 'start': '2020-12-17 23:26:30.060034', 'end': '2020-12-17 23:26:30.064152', 'delta': '0:00:00.004118', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'test -b /dev/nvme0n1 && echo "1" || echo "0"', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1'], 'stderr_lines': [], 'failed': False, 'item': {'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}, 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme0n1 && echo \"1\" || echo \"0\"", "delta": "0
:00:00.004118", "end": "2020-12-17 23:26:30.064152", "failed": false, "invocation": {"module_args": {"_raw_params": "test -b /dev/nvme0n1 && echo \"1\" || echo \"0\"", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:30.060034", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item={'cmd': 'test -b /dev/nvme2n1 && echo "1" || echo "0"', 'stdout': '1', 'stderr': '', 'rc': 0, 'start': '2020-12-17 23:26:35.027798', 'end': '2020-12-17 23:26:35.031938', 'delta': '0:00:00.004140', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'test -b /dev/nvme2n1 && echo "1" || echo "0"', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1'], 'stderr_lines': [], 'failed': False, 'item': {'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}, 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme2n1 && echo \"1\" || echo \"0\"", "delta": "
0:00:00.004140", "end": "2020-12-17 23:26:35.031938", "failed": false, "invocation": {"module_args": {"_raw_params": "test -b /dev/nvme2n1 && echo \"1\" || echo \"0\"", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:35.027798", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}, "skip_reason": "Conditional result was False"}
skipping: [host2.fqdn.tld] => (item={'cmd': 'test -b /dev/nvme1n1 && echo "1" || echo "0"', 'stdout': '1', 'stderr': '', 'rc': 0, 'start': '2020-12-17 23:26:39.988270', 'end': '2020-12-17 23:26:39.992465', 'delta': '0:00:00.004195', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'test -b /dev/nvme1n1 && echo "1" || echo "0"', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1'], 'stderr_lines': [], 'failed': False, 'item': {'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}, 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme1n1 && echo \"1\" || echo \"0\"", "delta": "
0:00:00.004195", "end": "2020-12-17 23:26:39.992465", "failed": false, "invocation": {"module_args": {"_raw_params": "test -b /dev/nvme1n1 && echo \"1\" || echo \"0\"", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:39.988270", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item={'cmd': 'test -b /dev/nvme0n1 && echo "1" || echo "0"', 'stdout': '1', 'stderr': '', 'rc': 0, 'start': '2020-12-17 23:26:30.150840', 'end': '2020-12-17 23:26:30.155008', 'delta': '0:00:00.004168', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'test -b /dev/nvme0n1 && echo "1" || echo "0"', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1'], 'stderr_lines': [], 'failed': False, 'item': {'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}, 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme0n1 && echo \"1\" || echo \"0\"", "delta": "0
:00:00.004168", "end": "2020-12-17 23:26:30.155008", "failed": false, "invocation": {"module_args": {"_raw_params": "test -b /dev/nvme0n1 && echo \"1\" || echo \"0\"", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:30.150840", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item={'cmd': 'test -b /dev/nvme2n1 && echo "1" || echo "0"', 'stdout': '1', 'stderr': '', 'rc': 0, 'start': '2020-12-17 23:26:35.169536', 'end': '2020-12-17 23:26:35.173650', 'delta': '0:00:00.004114', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'test -b /dev/nvme2n1 && echo "1" || echo "0"', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1'], 'stderr_lines': [], 'failed': False, 'item': {'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}, 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme2n1 && echo \"1\" || echo \"0\"", "delta": "
0:00:00.004114", "end": "2020-12-17 23:26:35.173650", "failed": false, "invocation": {"module_args": {"_raw_params": "test -b /dev/nvme2n1 && echo \"1\" || echo \"0\"", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:35.169536", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}, "skip_reason": "Conditional result was False"}
skipping: [host3.fqdn.tld] => (item={'cmd': 'test -b /dev/nvme1n1 && echo "1" || echo "0"', 'stdout': '1', 'stderr': '', 'rc': 0, 'start': '2020-12-17 23:26:40.160393', 'end': '2020-12-17 23:26:40.164598', 'delta': '0:00:00.004205', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'test -b /dev/nvme1n1 && echo "1" || echo "0"', '_uses_shell': True, 'warn': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1'], 'stderr_lines': [], 'failed': False, 'item': {'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}, 'ansible_loop_var': 'item'}) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "cmd": "test -b /dev/nvme1n1 && echo \"1\" || echo \"0\"", "delta": "
0:00:00.004205", "end": "2020-12-17 23:26:40.164598", "failed": false, "invocation": {"module_args": {"_raw_params": "test -b /dev/nvme1n1 && echo \"1\" || echo \"0\"", "_uses_shell": true, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "stdin_add_newline": true, "strip_empty_ends": true, "warn": true}}, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}, "rc": 0, "start": "2020-12-17 23:26:40.160393", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}, "skip_reason": "Conditional result was False"}
TASK [gluster.infra/roles/backend_setup : set fact if it will at least install 1 vdo device] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:17
ok: [host1.fqdn.tld] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_facts": {"gluster_infra_vdo_will_create_vdo": true}, "ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "index": 0, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}}
ok: [host1.fqdn.tld] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_facts": {"gluster_infra_vdo_will_create_vdo": true}, "ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "index": 1, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}}
ok: [host1.fqdn.tld] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_facts": {"gluster_infra_vdo_will_create_vdo": true}, "ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "index": 2, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}}
ok: [host2.fqdn.tld] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_facts": {"gluster_infra_vdo_will_create_vdo": true}, "ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "index": 0, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}}
ok: [host2.fqdn.tld] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_facts": {"gluster_infra_vdo_will_create_vdo": true}, "ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "index": 1, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}}
ok: [host2.fqdn.tld] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_facts": {"gluster_infra_vdo_will_create_vdo": true}, "ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "index": 2, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}}
ok: [host3.fqdn.tld] => (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_facts": {"gluster_infra_vdo_will_create_vdo": true}, "ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "index": 0, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}}
ok: [host3.fqdn.tld] => (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_facts": {"gluster_infra_vdo_will_create_vdo": true}, "ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "index": 1, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}}
ok: [host3.fqdn.tld] => (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_facts": {"gluster_infra_vdo_will_create_vdo": true}, "ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "index": 2, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}}
TASK [gluster.infra/roles/backend_setup : Install VDO dependencies] ************
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:26
ok: [host2.fqdn.tld] => {"changed": false, "msg": "Nothing to do", "rc": 0, "results": []}
ok: [host1.fqdn.tld] => {"changed": false, "msg": "Nothing to do", "rc": 0, "results": []}
ok: [host3.fqdn.tld] => {"changed": false, "msg": "Nothing to do", "rc": 0, "results": []}
TASK [gluster.infra/roles/backend_setup : set fact about vdo installed deps] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:38
ok: [host1.fqdn.tld] => {"ansible_facts": {"gluster_infra_installed_vdo_deps": true}, "changed": false}
ok: [host2.fqdn.tld] => {"ansible_facts": {"gluster_infra_installed_vdo_deps": true}, "changed": false}
ok: [host3.fqdn.tld] => {"ansible_facts": {"gluster_infra_installed_vdo_deps": true}, "changed": false}
TASK [gluster.infra/roles/backend_setup : Enable and start vdo service] ********
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:44
ok: [host2.fqdn.tld] => {"changed": false, "enabled": true, "name": "vdo", "state": "started", "status": {"ActiveEnterTimestamp": "Thu 2020-12-17 23:08:42 UTC", "ActiveEnterTimestampMonotonic": "225810494", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "sysinit.target systemd-journald.socket systemd-remount-fs.service basic.target system.slice", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Thu 2020-12-17 23:08:42 UTC", "AssertTimestampMonotonic": "225503610", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "no", "CPUAffinity": "", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanIsolate": "no", "CanReload": "
no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Thu 2020-12-17 23:08:42 UTC", "ConditionTimestampMonotonic": "225503606", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/vdo.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "VDO volume services", "DevicePolicy": "auto", "Dynam
icUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "ExecMainCode": "1", "ExecMainExitTimestamp": "Thu 2020-12-17 23:08:42 UTC", "ExecMainExitTimestampMonotonic": "225809896", "ExecMainPID": "2890", "ExecMainStartTimestamp": "Thu 2020-12-17 23:08:42 UTC", "ExecMainStartTimestampMonotonic": "225504510", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/vdo ; argv[]=/usr/bin/vdo start --all --confFile /etc/vdoconf.yml ; ignore_errors=no ; start_time=[Thu 2020-12-17 23:08:42 UTC] ; stop_time=[Thu 2020-12-17 23:08:42 UTC] ; pid=2890 ; code=exited ; status=0 }", "ExecStop": "{ path=/usr/bin/vdo ; argv[]=/usr/bin/vdo stop --all --confFile /etc/vdoconf.yml ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/vdo.service", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0"
, "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "vdo.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Thu 2020-12-17 23:08:42 UTC", "InactiveExitTimestampMonotonic": "225504548", "InvocationID": "f8be0be3573b4691b71538d14db9204b", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "6553
6", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "1540534", "LimitNPROCSoft": "1540534", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "1540534", "LimitSIGPENDINGSoft": "1540534", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "MemoryAccounting": "yes", "MemoryCurrent": "0", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAP
olicy": "n/a", "Names": "vdo.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "sysinit.target system.slice", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes",
"Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Thu 2020-12-17 23:08:42 UTC", "StateChangeTimestampMonotonic": "225810494", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "0", "TasksMax": "2464855", "TimeoutStartUSec": "infinity", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022
", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}}
ok: [host1.fqdn.tld] => {"changed": false, "enabled": true, "name": "vdo", "state": "started", "status": {"ActiveEnterTimestamp": "Thu 2020-12-17 23:08:14 UTC", "ActiveEnterTimestampMonotonic": "193553430", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "systemd-remount-fs.service basic.target sysinit.target system.slice systemd-journald.socket", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Thu 2020-12-17 23:08:14 UTC", "AssertTimestampMonotonic": "193253594", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "no", "CPUAffinity": "", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanIsolate": "no", "CanReload": "
no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Thu 2020-12-17 23:08:14 UTC", "ConditionTimestampMonotonic": "193253594", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/vdo.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "VDO volume services", "DevicePolicy": "auto", "Dynam
icUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "ExecMainCode": "1", "ExecMainExitTimestamp": "Thu 2020-12-17 23:08:14 UTC", "ExecMainExitTimestampMonotonic": "193552850", "ExecMainPID": "2895", "ExecMainStartTimestamp": "Thu 2020-12-17 23:08:14 UTC", "ExecMainStartTimestampMonotonic": "193254745", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/vdo ; argv[]=/usr/bin/vdo start --all --confFile /etc/vdoconf.yml ; ignore_errors=no ; start_time=[Thu 2020-12-17 23:08:14 UTC] ; stop_time=[Thu 2020-12-17 23:08:14 UTC] ; pid=2895 ; code=exited ; status=0 }", "ExecStop": "{ path=/usr/bin/vdo ; argv[]=/usr/bin/vdo stop --all --confFile /etc/vdoconf.yml ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/vdo.service", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0"
, "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "vdo.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Thu 2020-12-17 23:08:14 UTC", "InactiveExitTimestampMonotonic": "193254786", "InvocationID": "4f83d5c5be8a47af920bedbf375a1fdd", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "6553
6", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "1540534", "LimitNPROCSoft": "1540534", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "1540534", "LimitSIGPENDINGSoft": "1540534", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "MemoryAccounting": "yes", "MemoryCurrent": "0", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAP
olicy": "n/a", "Names": "vdo.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "system.slice sysinit.target", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes",
"Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Thu 2020-12-17 23:08:14 UTC", "StateChangeTimestampMonotonic": "193553430", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "0", "TasksMax": "2464855", "TimeoutStartUSec": "infinity", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022
", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}}
ok: [host3.fqdn.tld] => {"changed": false, "enabled": true, "name": "vdo", "state": "started", "status": {"ActiveEnterTimestamp": "Thu 2020-12-17 23:09:24 UTC", "ActiveEnterTimestampMonotonic": "262694590", "ActiveExitTimestampMonotonic": "0", "ActiveState": "active", "After": "systemd-remount-fs.service systemd-journald.socket sysinit.target basic.target system.slice", "AllowIsolate": "no", "AllowedCPUs": "", "AllowedMemoryNodes": "", "AmbientCapabilities": "", "AssertResult": "yes", "AssertTimestamp": "Thu 2020-12-17 23:09:24 UTC", "AssertTimestampMonotonic": "262381494", "Before": "shutdown.target multi-user.target", "BlockIOAccounting": "no", "BlockIOWeight": "[not set]", "CPUAccounting": "no", "CPUAffinity": "", "CPUQuotaPerSecUSec": "infinity", "CPUSchedulingPolicy": "0", "CPUSchedulingPriority": "0", "CPUSchedulingResetOnFork": "no", "CPUShares": "[not set]", "CPUUsageNSec": "[not set]", "CPUWeight": "[not set]", "CacheDirectoryMode": "0755", "CanIsolate": "no", "CanReload": "
no", "CanStart": "yes", "CanStop": "yes", "CapabilityBoundingSet": "cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend", "CollectMode": "inactive", "ConditionResult": "yes", "ConditionTimestamp": "Thu 2020-12-17 23:09:24 UTC", "ConditionTimestampMonotonic": "262381494", "ConfigurationDirectoryMode": "0755", "Conflicts": "shutdown.target", "ControlGroup": "/system.slice/vdo.service", "ControlPID": "0", "DefaultDependencies": "yes", "Delegate": "no", "Description": "VDO volume services", "DevicePolicy": "auto", "Dynam
icUser": "no", "EffectiveCPUs": "", "EffectiveMemoryNodes": "", "ExecMainCode": "1", "ExecMainExitTimestamp": "Thu 2020-12-17 23:09:24 UTC", "ExecMainExitTimestampMonotonic": "262694018", "ExecMainPID": "2899", "ExecMainStartTimestamp": "Thu 2020-12-17 23:09:24 UTC", "ExecMainStartTimestampMonotonic": "262382322", "ExecMainStatus": "0", "ExecStart": "{ path=/usr/bin/vdo ; argv[]=/usr/bin/vdo start --all --confFile /etc/vdoconf.yml ; ignore_errors=no ; start_time=[Thu 2020-12-17 23:09:24 UTC] ; stop_time=[Thu 2020-12-17 23:09:24 UTC] ; pid=2899 ; code=exited ; status=0 }", "ExecStop": "{ path=/usr/bin/vdo ; argv[]=/usr/bin/vdo stop --all --confFile /etc/vdoconf.yml ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }", "FailureAction": "none", "FileDescriptorStoreMax": "0", "FragmentPath": "/usr/lib/systemd/system/vdo.service", "GID": "[not set]", "GuessMainPID": "yes", "IOAccounting": "no", "IOSchedulingClass": "0", "IOSchedulingPriority": "0"
, "IOWeight": "[not set]", "IPAccounting": "no", "IPEgressBytes": "18446744073709551615", "IPEgressPackets": "18446744073709551615", "IPIngressBytes": "18446744073709551615", "IPIngressPackets": "18446744073709551615", "Id": "vdo.service", "IgnoreOnIsolate": "no", "IgnoreSIGPIPE": "yes", "InactiveEnterTimestampMonotonic": "0", "InactiveExitTimestamp": "Thu 2020-12-17 23:09:24 UTC", "InactiveExitTimestampMonotonic": "262382356", "InvocationID": "6ba1a8bd692f4d45902b2727876f9a6f", "JobRunningTimeoutUSec": "infinity", "JobTimeoutAction": "none", "JobTimeoutUSec": "infinity", "KeyringMode": "private", "KillMode": "control-group", "KillSignal": "15", "LimitAS": "infinity", "LimitASSoft": "infinity", "LimitCORE": "infinity", "LimitCORESoft": "infinity", "LimitCPU": "infinity", "LimitCPUSoft": "infinity", "LimitDATA": "infinity", "LimitDATASoft": "infinity", "LimitFSIZE": "infinity", "LimitFSIZESoft": "infinity", "LimitLOCKS": "infinity", "LimitLOCKSSoft": "infinity", "LimitMEMLOCK": "6553
6", "LimitMEMLOCKSoft": "65536", "LimitMSGQUEUE": "819200", "LimitMSGQUEUESoft": "819200", "LimitNICE": "0", "LimitNICESoft": "0", "LimitNOFILE": "262144", "LimitNOFILESoft": "1024", "LimitNPROC": "1540534", "LimitNPROCSoft": "1540534", "LimitRSS": "infinity", "LimitRSSSoft": "infinity", "LimitRTPRIO": "0", "LimitRTPRIOSoft": "0", "LimitRTTIME": "infinity", "LimitRTTIMESoft": "infinity", "LimitSIGPENDING": "1540534", "LimitSIGPENDINGSoft": "1540534", "LimitSTACK": "infinity", "LimitSTACKSoft": "8388608", "LoadState": "loaded", "LockPersonality": "no", "LogLevelMax": "-1", "LogRateLimitBurst": "0", "LogRateLimitIntervalUSec": "0", "LogsDirectoryMode": "0755", "MainPID": "0", "MemoryAccounting": "yes", "MemoryCurrent": "0", "MemoryDenyWriteExecute": "no", "MemoryHigh": "infinity", "MemoryLimit": "infinity", "MemoryLow": "0", "MemoryMax": "infinity", "MemorySwapMax": "infinity", "MountAPIVFS": "no", "MountFlags": "", "NFileDescriptorStore": "0", "NRestarts": "0", "NUMAMask": "", "NUMAP
olicy": "n/a", "Names": "vdo.service", "NeedDaemonReload": "no", "Nice": "0", "NoNewPrivileges": "no", "NonBlocking": "no", "NotifyAccess": "none", "OOMScoreAdjust": "0", "OnFailureJobMode": "replace", "PermissionsStartOnly": "no", "Perpetual": "no", "PrivateDevices": "no", "PrivateMounts": "no", "PrivateNetwork": "no", "PrivateTmp": "no", "PrivateUsers": "no", "ProtectControlGroups": "no", "ProtectHome": "no", "ProtectKernelModules": "no", "ProtectKernelTunables": "no", "ProtectSystem": "no", "RefuseManualStart": "no", "RefuseManualStop": "no", "RemainAfterExit": "yes", "RemoveIPC": "no", "Requires": "sysinit.target system.slice", "Restart": "no", "RestartUSec": "100ms", "RestrictNamespaces": "no", "RestrictRealtime": "no", "RestrictSUIDSGID": "no", "Result": "success", "RootDirectoryStartOnly": "no", "RuntimeDirectoryMode": "0755", "RuntimeDirectoryPreserve": "no", "RuntimeMaxUSec": "infinity", "SameProcessGroup": "no", "SecureBits": "0", "SendSIGHUP": "no", "SendSIGKILL": "yes",
"Slice": "system.slice", "StandardError": "inherit", "StandardInput": "null", "StandardInputData": "", "StandardOutput": "journal", "StartLimitAction": "none", "StartLimitBurst": "5", "StartLimitIntervalUSec": "10s", "StartupBlockIOWeight": "[not set]", "StartupCPUShares": "[not set]", "StartupCPUWeight": "[not set]", "StartupIOWeight": "[not set]", "StateChangeTimestamp": "Thu 2020-12-17 23:09:24 UTC", "StateChangeTimestampMonotonic": "262694590", "StateDirectoryMode": "0755", "StatusErrno": "0", "StopWhenUnneeded": "no", "SubState": "exited", "SuccessAction": "none", "SyslogFacility": "3", "SyslogLevel": "6", "SyslogLevelPrefix": "yes", "SyslogPriority": "30", "SystemCallErrorNumber": "0", "TTYReset": "no", "TTYVHangup": "no", "TTYVTDisallocate": "no", "TasksAccounting": "yes", "TasksCurrent": "0", "TasksMax": "2464855", "TimeoutStartUSec": "infinity", "TimeoutStopUSec": "1min 30s", "TimerSlackNSec": "50000", "Transient": "no", "Type": "oneshot", "UID": "[not set]", "UMask": "0022
", "UnitFilePreset": "enabled", "UnitFileState": "enabled", "UtmpMode": "init", "WantedBy": "multi-user.target", "WatchdogTimestampMonotonic": "0", "WatchdogUSec": "0"}}
TASK [gluster.infra/roles/backend_setup : Create VDO with specified size] ******
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:53
failed: [host1.fqdn.tld] (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": "vdo: ERROR - Can't open /dev/nvme0n1 exclusively. Mounted filesystem?\n", "index": 0, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "msg": "Creating VDO vdo_nvme0n1 failed.", "rc": 1}
failed: [host2.fqdn.tld] (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": "vdo: ERROR - Can't open /dev/nvme0n1 exclusively. Mounted filesystem?\n", "index": 0, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "msg": "Creating VDO vdo_nvme0n1 failed.", "rc": 1}
failed: [host3.fqdn.tld] (item={'name': 'vdo_nvme0n1', 'device': '/dev/nvme0n1', 'slabsize': '2G', 'logicalsize': '1000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": "vdo: ERROR - Can't open /dev/nvme0n1 exclusively. Mounted filesystem?\n", "index": 0, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme0n1", "emulate512": "off", "logicalsize": "1000G", "maxDiscardSize": "16M", "name": "vdo_nvme0n1", "slabsize": "2G", "writepolicy": "auto"}, "msg": "Creating VDO vdo_nvme0n1 failed.", "rc": 1}
failed: [host3.fqdn.tld] (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": "vdo: ERROR - Can't open /dev/nvme2n1 exclusively. Mounted filesystem?\n", "index": 1, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}, "msg": "Creating VDO vdo_nvme2n1 failed.", "rc": 1}
failed: [host2.fqdn.tld] (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": "vdo: ERROR - Can't open /dev/nvme2n1 exclusively. Mounted filesystem?\n", "index": 1, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}, "msg": "Creating VDO vdo_nvme2n1 failed.", "rc": 1}
failed: [host1.fqdn.tld] (item={'name': 'vdo_nvme2n1', 'device': '/dev/nvme2n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": "vdo: ERROR - Can't open /dev/nvme2n1 exclusively. Mounted filesystem?\n", "index": 1, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme2n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme2n1", "slabsize": "32G", "writepolicy": "auto"}, "msg": "Creating VDO vdo_nvme2n1 failed.", "rc": 1}
failed: [host2.fqdn.tld] (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": "vdo: ERROR - Can't open /dev/nvme1n1 exclusively. Mounted filesystem?\n", "index": 2, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}, "msg": "Creating VDO vdo_nvme1n1 failed.", "rc": 1}
failed: [host3.fqdn.tld] (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": "vdo: ERROR - Can't open /dev/nvme1n1 exclusively. Mounted filesystem?\n", "index": 2, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}, "msg": "Creating VDO vdo_nvme1n1 failed.", "rc": 1}
failed: [host1.fqdn.tld] (item={'name': 'vdo_nvme1n1', 'device': '/dev/nvme1n1', 'slabsize': '32G', 'logicalsize': '5000G', 'blockmapcachesize': '128M', 'emulate512': 'off', 'writepolicy': 'auto', 'maxDiscardSize': '16M'}) => {"ansible_index_var": "index", "ansible_loop_var": "item", "changed": false, "err": "vdo: ERROR - Can't open /dev/nvme1n1 exclusively. Mounted filesystem?\n", "index": 2, "item": {"blockmapcachesize": "128M", "device": "/dev/nvme1n1", "emulate512": "off", "logicalsize": "5000G", "maxDiscardSize": "16M", "name": "vdo_nvme1n1", "slabsize": "32G", "writepolicy": "auto"}, "msg": "Creating VDO vdo_nvme1n1 failed.", "rc": 1}
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
host1.fqdn.tld : ok=24 changed=7 unreachable=0 failed=1 skipped=17 rescued=0 ignored=1
host2.fqdn.tld : ok=23 changed=6 unreachable=0 failed=1 skipped=17 rescued=0 ignored=1
host3.fqdn.tld : ok=23 changed=6 unreachable=0 failed=1 skipped=17 rescued=0 ignored=1
Please check /var/log/cockpit/ovirt-dashboard/gluster-deployment.log for more informations.
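A common cause of the "Can't open ... exclusively" error above is leftover signatures (an old filesystem, LVM metadata or a stale VDO volume) on the NVMe devices from a previous attempt. A hedged check/cleanup sketch; wipefs is destructive, so confirm the device names first:
# inspect the devices for leftover signatures and stale VDO volumes
lsblk -f /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
vdo list
# destructive: clears old signatures so the deployment can claim the device
wipefs -a /dev/nvme0n1
# a stale VDO volume from an earlier run can be dropped with:
# vdo remove --name=vdo_nvme0n1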
4 years, 4 months
fence_xvm for testing
by Alex K
Hi friends,
I was wondering what is needed to set up fence_xvm in order to use it for
power management in nested virtual environments, for testing purposes.
I have followed the following steps:
https://github.com/rightkick/Notes/blob/master/Ovirt-fence_xmv.md
I tried also
engine-config -s CustomFenceAgentMapping="fence_xvm=_fence_xvm"
From command line all seems fine and I can get the status of the host VMs,
but I was not able to find what is needed to set this up at engine UI:
[image: image.png]
At username and pass I just filled dummy values as they should not be
needed for fence_xvm.
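For reference, a rough sketch of the command-line check (the multicast address is the one from the logs below, the key path is the usual default, and kvm0 is just an example guest name):
# list all guests known to fence_virtd on the hypervisor
fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -o list
# query the power status of one nested host VM
fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -H kvm0 -o status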
I always get an error at GUI while engine logs give:
2020-12-14 08:53:48,343Z WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID:
VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host
kvm0.lab.local.Internal JSON-RPC error
2020-12-14 08:53:48,343Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default
task-4) [07c1d540-6d8d-419c-affb-181495d75759] FINISH, FenceVdsVDSCommand,
return: FenceOperationResult:{status='ERROR', powerStatus='UNKNOWN',
message='Internal JSON-RPC error'}, log id: 2437b13c
2020-12-14 08:53:48,400Z WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID:
FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Execution of power
management status on Host kvm0.lab.local using Proxy Host kvm1.lab.local
and Fence Agent fence_xvm:225.0.0.12 failed.
2020-12-14 08:53:48,400Z WARN
[org.ovirt.engine.core.bll.pm.FenceAgentExecutor] (default task-4)
[07c1d540-6d8d-419c-affb-181495d75759] Fence action failed using proxy host
'kvm1.lab.local', trying another proxy
2020-12-14 08:53:48,485Z ERROR
[org.ovirt.engine.core.bll.pm.FenceProxyLocator] (default task-4)
[07c1d540-6d8d-419c-affb-181495d75759] Can not run fence action on host
'kvm0.lab.local', no suitable proxy host was found.
2020-12-14 08:53:48,486Z WARN
[org.ovirt.engine.core.bll.pm.FenceAgentExecutor] (default task-4)
[07c1d540-6d8d-419c-affb-181495d75759] Failed to find another proxy to
re-run failed fence action, retrying with the same proxy 'kvm1.lab.local'
2020-12-14 08:53:48,582Z WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID:
VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host
kvm0.lab.local.Internal JSON-RPC error
2020-12-14 08:53:48,582Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default
task-4) [07c1d540-6d8d-419c-affb-181495d75759] FINISH, FenceVdsVDSCommand,
return: FenceOperationResult:{status='ERROR', powerStatus='UNKNOWN',
message='Internal JSON-RPC error'}, log id: 8607bc9
2020-12-14 08:53:48,637Z WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-4) [07c1d540-6d8d-419c-affb-181495d75759] EVENT_ID:
FENCE_OPERATION_USING_AGENT_AND_PROXY_FAILED(9,021), Execution of power
management status on Host kvm0.lab.local using Proxy Host kvm1.lab.local
and Fence Agent fence_xvm:225.0.0.12 failed.
Any idea?
Thanx,
Alex
4 years, 4 months
High performance VM cannot migrate due to TSC frequency
by Gianluca Cecchi
Hello,
I'm in 4.4.3 and CentOS 8.3 with 3 hosts.
I have a high performance VM that is running on ov300 and is configured to
be run on any host.
It seems that whether or not I set the option
"Migrate only to hosts with the same TSC frequency"
I am always unable to migrate the VM, and in engine.log I see this:
2020-12-11 15:56:03,424+01 INFO
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-36)
[e4801b28-c832-4474-aa53-4ebfd7c6e2d0] Candidate host 'ov301'
('382bfc8f-60d5-4e06-8571-7dae1700574d') was filtered out by
'VAR__FILTERTYPE__INTERNAL' filter 'Migration-Tsc-Frequency' (correlation
id: null)
2020-12-11 15:56:03,424+01 INFO
[org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-36)
[e4801b28-c832-4474-aa53-4ebfd7c6e2d0] Candidate host 'ov200'
('949d0087-2c24-4759-8427-f9eade1dd2cc') was filtered out by
'VAR__FILTERTYPE__INTERNAL' filter 'Migration-Tsc-Frequency' (correlation
id: null)
Can you verify if it is only my problem?
Apart from the problem itself, what is "TSC frequency" and how can I check
whether my 3 hosts actually differ in it?
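For the check itself, a hedged sketch to run on each host (assuming VDSM exposes the value it compares under a tsc-related key in its capabilities output):
# value VDSM reports to the engine
vdsm-client Host getCapabilities | grep -i tsc
# kernel's detected TSC rate, as a cross-check
dmesg | grep -i 'tsc:' | grep -i mhz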
Normal VMs are able to migrate without problems
Thanks,
Gianluca
4 years, 4 months
[ANN] oVirt 4.4.4 Sixth Release Candidate is now available for testing
by Lev Veyde
oVirt 4.4.4 Sixth Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.4
Sixth Release Candidate for testing, as of December 17th, 2020.
This update is the fourth in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA or later should not require re-doing these
steps, if already performed while upgrading from 4.4.1 to 4.4.2 GA. These
are only required to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
Host enter emergency mode after upgrading to latest build
If you have your root file system on a multipath device on your hosts you
should be aware that after upgrading from 4.4.1 to 4.4.4 you may get your
host entering emergency mode.
In order to prevent this be sure to upgrade oVirt Engine first, then on
your hosts:
1. Remove the current lvm filter while still on 4.4.1, or in emergency mode
(if rebooted).
2. Reboot.
3. Upgrade to 4.4.4 (redeploy in case of already being on 4.4.4).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to
rebuild the initramfs with the correct filter configuration.
6. Reboot.
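As a rough, unofficial shell sketch, the host-side steps above amount to the following (the sed line is only an assumption about where the old filter lives; adjust to your layout):
# 1) still on 4.4.1 (or from emergency mode): drop the old LVM filter, then reboot
sed -i.bak '/^\s*filter\s*=/d' /etc/lvm/lvm.conf   # assumption: filter set in lvm.conf
reboot
# 2) upgrade the host to 4.4.4 from the engine, then confirm the new filter
vdsm-tool config-lvm-filter
# 3) non-oVirt-Node hosts only: rebuild the initramfs with multipath, then reboot
dracut --force --add multipath
reboot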
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions
on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see
the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
<https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
<https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
* oVirt Node 4.4 based on CentOS Linux 8.3 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.4 release highlights:
http://www.ovirt.org/release/4.4.4/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.4/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
4 years, 4 months
Network Teamd support
by Carlos C
Hi folks,
Does oVirt 4.4.4 support, or will it support, network teaming (teamd)? Or only bonding?
regards
Carlos
4 years, 4 months
Cannot connect Glusterfs storage to Ovirt
by Ariez Ahito
Hi guys, I have installed the oVirt 4.4 hosted engine and a separate GlusterFS storage.
Now during hosted-engine deployment I choose
STORAGE TYPE: gluster
Storage connection: 10.33.50.33/VOL1
Mount Option:
and when I try to connect,
this gives me an error:
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Problem while trying to mount target]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Problem while trying to mount target]\". HTTP response code is 400."}
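Before re-running the deployment it may be worth checking from the deployment host that the volume can be mounted manually at all. A hedged sketch using the address and volume name from above (note that mount expects the host:/volume form):
# confirm the volume is started and reachable from this host
gluster --remote-host=10.33.50.33 volume info VOL1
# try the same kind of mount the installer would do
mkdir -p /mnt/voltest
mount -t glusterfs 10.33.50.33:/VOL1 /mnt/voltest && ls /mnt/voltest
umount /mnt/voltest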
4 years, 4 months
Move self hosted engine to a different gluster volume
by ralf@os-s.de
Hi,
I apparently successfully upgraded a hyperconverged self hosted setup from 4.3 to 4.4. During this process the selfhosted engine required a new gluster volume (/engine-new). I used a temporary storage for that. Is it possible to move the SHE back to the original volume (/engine)?
What steps would be needed? Could I just do:
1. global maintenance
2. stop engine and SHE guest
3. copy all files from glusterfs /engine-new to /engine
4. use hosted-engine --set-shared-config storage <server1>:/engine
hosted-engine --set-shared-config mnt_options
backup-volfile-servers=<server2>:<server3>
5. disable maintenance
Or are additional steps required?
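For what it's worth, as a shell-level sketch that sequence would look roughly like the following (server names are placeholders from the list above; the note about the per-host local config is an assumption to verify):
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
# copy the files from /engine-new back to /engine on the gluster side, then repoint:
hosted-engine --set-shared-config storage server1:/engine
hosted-engine --set-shared-config mnt_options backup-volfile-servers=server2:server3
# assumption: each host's local /etc/ovirt-hosted-engine/hosted-engine.conf may also
# need the new storage value before the HA agents are restarted
hosted-engine --set-maintenance --mode=none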
Kind regards,
Ralf
4 years, 4 months
Increase the initial size of LVM snapshots to prevent VM's from freezing
by Gal Villaret
Hi all,
Lately, I have been encountering an issue where VMs freeze during backups.
From what I can gather, this happens because some of the VMs sometimes perform large writes during the backup window and the snapshots do not grow fast enough.
I use ISCSI storage with all VM disks preallocated.
Is there a configuration value I can change in order to increase the initial size of snapshots and also maybe change the watermark trigger for the expansion of snapshots?
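For what it's worth, VDSM has host-side tunables for how thin/snapshot LVs are extended. A hedged sketch of setting them on a host (the option names exist in vdsm.conf's [irs] section, but the values are examples rather than recommendations, and the section may already be present):
# extend earlier (lower watermark) and in bigger chunks; assumes no existing [irs] section
cat >> /etc/vdsm/vdsm.conf <<'EOF'
[irs]
volume_utilization_percent = 25
volume_utilization_chunk_mb = 2048
EOF
# apply per host, one at a time, with the host in maintenance
systemctl restart vdsmd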
Thanks.
4 years, 4 months
Bad CPU TYPE after Centos 8.3
by Lionel Caignec
Hi,
I've just upgraded one host to the latest CentOS release, and after the
reboot oVirt says "Host CPU type is not compatible with Cluster Properties."
Looking at the server I can see the CPU is detected as Skylake
(cat /sys/devices/cpu/caps/pmu_name).
My CPU is an Intel(R) Xeon(R) Gold 5220R CPU @ 2.20GHz, so according to ark
it is a Cascade Lake server part.
When I installed this new oVirt environment months ago, I configured the
cluster as "Secure Intel Cascade Lake Server Family" and all was fine.
Can anyone help me?
Environment :
OS : Centos 8.3 (for manager and host in error)
ovirt-engine 4.4.3.12-1.el8
Host with error : vdsm.x86_64 4.40.35.1-1.el8
Working Host : vdsm.x86_64 .40.26.3-1.el8
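For diagnosis, a hedged sketch of what the host actually advertises (the cpuModel/cpuFlags key names are an assumption about the VDSM capabilities output, and the flag list is only a guess at what separates Cascade Lake from Skylake for libvirt):
# model and flags VDSM reports to the engine
vdsm-client Host getCapabilities | grep -E 'cpuModel|cpuFlags'
# flags that typically distinguish Cascade Lake from Skylake in /proc/cpuinfo
grep -o -e avx512_vnni -e arch_capabilities -e md_clear /proc/cpuinfo | sort -u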
Maybe someone can help me? I don't know what to do, and I don't want to try
updating another host if it will fail as well...
Sorry if I posted my message in the wrong place.
Lionel Caignec.
4 years, 4 months
How to unlock disk images?
by thomas@hoberg.net
On one oVirt 4.3 farm I have three locked images I'd like to clear out.
One is an ISO image that somehow never completed its transfer due to a slow network. It occupies little space, but in the GUI it sticks out and irritates. I guess it would just take an update somewhere in the Postgres database to unlock it and make it deletable. But since the schema isn't documented, I'd rather ask here: how do I unlock the image?
Two are left-overs from a snapshot that somehow never completed, one for the disk another for the RAM part. I don't know how my colleague managed to get into that state, but impatience/concurrency probably was a factor, a transient failure of a node could have been another.
In any case the snapshot operation logically has been going on for weeks without any real activity, survived several restarts (after patches) of all nodes and the ME and shows no sign of disappearing voluntarily.
Again, I'd assume that I need to clear out the snapshot job, unlock the images and then delete what's left. Some easy SQL and most likely a management engine restart afterwards... if you knew what you were doing (or there was an option in the GUI).
So how do I list/delete snapshot jobs that aren't really running any more?
And how do I unlock the images so I can delete them?
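For reference, a hedged sketch of the engine-side helper usually pointed at for this, instead of hand-written SQL (path and options as on a default install; double-check the IDs with the query option before unlocking anything):
# on the engine machine
cd /usr/share/ovirt-engine/setup/dbutils
./unlock_entity.sh -t all -q             # list locked VMs, templates, disks and snapshots
./unlock_entity.sh -t disk <disk-id>     # unlock a single image
./unlock_entity.sh -t snapshot <snapshot-id>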
Thanks for your help!
4 years, 4 months
illegal disk status
by Daniel Menzel
Hi,
we have a problem with some VMs which cannot be started anymore due to
an illegal disk status of a snapshot.
What happened (most likely)? We tried to snapshot those VMs some days ago,
but the storage domain didn't have enough free space left. Yesterday we
shut those VMs down - and since then they don't start anymore.
What have I tried so far?
1. Via the web interface I tried to remove the snapshot - didn't work.
2. Searched the internet. Found (among other stuff) this:
https://bugzilla.redhat.com/show_bug.cgi?id=1649129
3. via /vdsm-tool dump-volume-chains/ I managed to list those 5
snapshots (see below).
The output for one machine was:
image: 2d707743-4a9e-40bb-b223-83e3be672dfe
- 9ae6ea73-94b4-4588-9a6b-ea7a58ef93c9
status: OK, voltype: INTERNAL, format: RAW, legality:
LEGAL, type: PREALLOCATED, capacity: 32212254720, truesize: 32212254720
- f7d2c014-e8f5-4413-bfc5-4aa1426cb1e2
status: ILLEGAL, voltype: LEAF, format: COW, legality:
ILLEGAL, type: SPARSE, capacity: 32212254720, truesize: 29073408
So my idea was to follow the said bugzilla thread and update the volume
- but I didn't manage to find input for the /job_id/ and /generation/.
So my question is: does anyone have an idea on how to (force) remove a
given snapshot via vdsm-{tool|client}?
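A hedged starting point for the missing inputs (the Volume getInfo verb does exist; the idea that the generation is in its output and that a fresh UUID can serve as the job_id is only my reading of the bugzilla workflow, so please verify there before writing anything):
# run on the SPM host; the pool/domain IDs come from 'vdsm-tool dump-volume-chains'
vdsm-client Volume getInfo storagepoolID=<sp-id> storagedomainID=<sd-id> \
    imageID=2d707743-4a9e-40bb-b223-83e3be672dfe \
    volumeID=f7d2c014-e8f5-4413-bfc5-4aa1426cb1e2
# the returned JSON should contain the current 'generation'; a new job_id can be
# generated with: uuidgen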
Thanks in advance!
Daniel
--
Daniel Menzel
Geschäftsführer
Menzel IT GmbH
Charlottenburger Str. 33a
13086 Berlin
+49 (0) 30 / 5130 444 - 00
daniel.menzel(a)menzel-it.net
https://menzel-it.net
Geschäftsführer: Daniel Menzel, Josefin Menzel
Unternehmenssitz: Berlin
Handelsregister: Amtsgericht Charlottenburg
Handelsregister-Nummer: HRB 149835 B
USt-ID: DE 309 226 751
4 years, 4 months
Nodes install Python3 every day
by jb
Hello,
I noticed a strange thing in the logs. All nodes (oVirt 4.4.3.12-1.el8)
install Python3 again every day after checking for updates. The GUI
log shows these entries:
13.12.2020, 11:39 Check for update of host onode1.example.org.
Gathering Facts.
13.12.2020, 11:39 Check for update of host onode1.example.org.
include_tasks.
13.12.2020, 11:39 Check for update of host onode1.example.org.
Detect host operating system.
13.12.2020, 11:39 Check for update of host onode1.example.org.
Fetch installed packages.
13.12.2020, 11:39 Check for update of host onode1.example.org.
Check if vdsm is preinstalled.
13.12.2020, 11:39 Check for update of host onode1.example.org.
Parse operating system release.
13.12.2020, 11:39 Check for update of host onode1.example.org.
Detect if host is a prebuilt image.
13.12.2020, 11:39 Check for update of host onode1.example.org.
Install Python3 for CentOS/RHEL8 hosts.
13.12.2020, 11:39 Check for update of host onode1.example.org.
Set facts.
/var/log/ovirt-engine/ansible-runner-service.log shows:
2020-12-13 11:39:20,849 - runner_service.services.playbook - DEBUG -
cb_event_handler event_data={'uuid':
'7c4b039d-6212-4b52-95fd-40d85036ed98', 'counter': 33, 'stdout':
'ok: [onode1.example.org]', 'start_line': 31, 'end_line': 32,
'runner_ident': '72737578-3d2f-11eb-b955-00163e33f845', 'event':
'runner_on_ok', 'pid': 603696, 'created':
'2020-12-13T10:39:20.847869', 'parent_uuid':
'00163e33-f845-ee64-acee-000000000013', 'event_data': {'playbook':
'ovirt-host-check-upgrade.yml', 'playbook_uuid':
'0eb5c935-9f17-4b07-961e-7e0a866dd5ed', 'play': 'all', 'play_uuid':
'00163e33-f845-ee64-acee-000000000008', 'play_pattern': 'all',
'task': 'Install Python3 for CentOS/RHEL8 hosts', 'task_uuid':
'00163e33-f845-ee64-acee-000000000013', 'task_action': 'yum',
'task_args': '', 'task_path':
'/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-deploy-facts/tasks/main.yml:20',
'role': 'ovirt-host-deploy-facts', 'host': 'onode1.example.org',
'remote_addr': 'onode1.example.org', 'res': {'msg': 'Nothing to do',
'changed': False, 'results': [], 'rc': 0, 'invocation':
{'module_args': {'name': ['python3'], 'state': 'present',
'allow_downgrade': False, 'autoremove': False, 'bugfix': False,
'disable_gpg_check': False, 'disable_plugin': [], 'disablerepo': [],
'download_only': False, 'enable_plugin': [], 'enablerepo': [],
'exclude': [], 'installroot': '/', 'install_repoquery': True,
'install_weak_deps': True, 'security': False, 'skip_broken': False,
'update_cache': False, 'update_only': False, 'validate_certs': True,
'lock_timeout': 30, 'conf_file': None, 'disable_excludes': None,
'download_dir': None, 'list': None, 'releasever': None}},
'_ansible_no_log': False}, 'start': '2020-12-13T10:39:19.872585',
'end': '2020-12-13T10:39:20.847636', 'duration': 0.975051,
'event_loop': None, 'uuid': '7c4b039d-6212-4b52-95fd-40d85036ed98'}}
Is this a bug?
Best regards
Jonathan
4 years, 4 months
Re: CentOS 8 is dead
by marcel d'heureuse
So, I think I should keep the live system on oVirt 4.3 to be sure it still works after 2021?
Which distribution has 10 years of support? CentOS 7 has support up to June 2024.
Has someone started to evaluate Gentoo?
marcel
Am 8. Dezember 2020 21:15:48 MEZ schrieb "Vinícius Ferrão via Users" <users(a)ovirt.org>:
>CentOS Stream is unstable at best.
>
>I’ve used it recently and it was just a mess. There’s no binary
>compatibility with the current point release and there’s no version
>pinning. So it will be really difficult to keep track of things.
>
>I’m really curious how oVirt will handle this.
>
>From: Wesley Stewart <wstewart3(a)gmail.com>
>Sent: Tuesday, December 8, 2020 4:56 PM
>To: Strahil Nikolov <hunter86_bg(a)yahoo.com>
>Cc: users <users(a)ovirt.org>
>Subject: [ovirt-users] Re: CentOS 8 is dead
>
>This is a little concerning.
>
>But it seems pretty easy to convert:
>https://www.centos.org/centos-stream/
>
>However I would be curious to see if someone tests this with having an
>active ovirt node!
>
>On Tue, Dec 8, 2020 at 2:39 PM Strahil Nikolov via Users
><users(a)ovirt.org<mailto:users@ovirt.org>> wrote:
>Hello All,
>
>I'm really worried about the following news:
>https://blog.centos.org/2020/12/future-is-centos-stream/
>
>Did anyone tried to port oVirt to SLES/openSUSE or any Debian-based
>distro ?
>
>Best Regards,
>Strahil Nikolov
>_______________________________________________
>Users mailing list -- users(a)ovirt.org<mailto:users@ovirt.org>
>To unsubscribe send an email to
>users-leave(a)ovirt.org<mailto:users-leave@ovirt.org>
>Privacy Statement: https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/HZC4D4OSYL6...
--
This message was sent from my Android device with K-9 Mail.
4 years, 4 months
what's error?
by tommy
Hi, I installed oVirt on my hosts, but there are many errors on the
console of the host, such as:
Why ???
Thanks.
4 years, 4 months
Upgrade 4.3 to 4.4 with migration CentOS 7 to CentOS 8.3
by Ilya Fedotov
Good day,
I encountered the following problem when migrating to oVirt 4.4.
When running
hosted-engine --deploy --restore-from-file=backup.bck
I get the error below:
Upgrading engine extension configuration:
/etc/ovirt-engine/extensions.d/xx-xxxx.properties", "[ INFO ] Upgrading
CA", "[ INFO ]
Creating CA: /etc/pki/ovirt-engine/qemu-ca.pem", "[ ERROR ] Failed to
execute stage 'Misc configuration': [Errno 17]
File exists: '/etc/pki/ovirt-engine/ca.pem' ->
'/etc/pki/ovirt-engine/apache-ca.pem'", "[ INFO ]
DNF Performing DNF transaction rollback", "[ INFO ] Stage: Clean up",
When setting the initial parameters I answer "No" to the prompt
'Renew engine CA on restore if needed? Please notice that if you choose
Yes, all hosts will have to be later manually reinstalled from the
engine. (@VALUES@)[@DEFAULT@]'
I don't need to renew the CA certificate: this is an upgrade and I don't
want to have to re-establish the connections with the nodes!
Even with this option set, it still tries to create a new certificate.
I found a similar question here:
https://www.mail-archive.com/users@ovirt.org/msg61114.html
Package Data:
ovirt-hosted-engine-setup-2.4.8-1.el8.noarch
ovirt-hosted-engine-ha-2.4.5-1.el8.noarch
ovirt-engine-appliance-4.4-20201110154142.1.el8.x86_64
CentOS Linux release 8.3.2011
4.18.0-240.1.1.el8_3.x86_64
Please help, programmers...
with br, Ilya Fedotov
4 years, 4 months
hosted engine wrong bios
by Michael Rohweder
Hi,
I am running oVirt Node 4.4.2 and ran into an old mistake of mine.
I changed the cluster default to UEFI weeks ago.
Today the node had to be restarted, and now nothing works:
the manager VM tries to boot with UEFI, and all other VMs are down because
I cannot start any of them from the CLI.
How can I change that setting (some config, file or something else) for
this VM back to the normal BIOS?
Greetings
Michael
4 years, 4 months
Hosted engine deployment w/ two networks (one migration, one management).
by Gilboa Davara
Hello all,
I'm slowly building a new ovirt over glusterfs cluster with 3 fairly beefy
servers.
Each of the nodes has the following network configuration:
3x1GbE: ILO, ovirtmgmt and SSH.
4x10GbE: Private and external VM network(s).
2x40GBE: GlusterFS and VM migration.
Now, for some odd reasons, I rather keep the two 40GbE networks
disconnected from my normal management network.
My question is simple: I remember that I can somehow configure oVirt to use
two different networks for management / migration, but as far as I can
see, I cannot configure the cluster to use a different network for
migration purposes.
1. Am I missing something?
2. Can I somehow configure the hosted engine to have an IP in more than
network (management and migration)?
3. More of a Gluster question: as the 40GbE NICs and the 1GbE NIC sit on
different switches, can I somehow configure Gluster to fall back to the 1GbE
NIC if the main 40GbE link fails? AFAIR bonding doesn't support an
asymmetrical network device configuration. (And rightly so, in this case.)
Thanks,
Gilboa
4 years, 4 months
VMs shut down after backup: "moved from 'Up' --> 'Down'" on RHEL host
by Łukasz Kołaciński
Hello,
Thank you for helping with the previous issue. Unfortunately we have another one. We have an RHV manager with several different hosts. After a backup, a VM placed on a RHEL host shuts down. In engine.log I found the moment when this happens: "VM '35183baa-1c70-4016-b7cd-528889876f19'(stor2rrd) moved from 'Up' --> 'Down'". I attached the whole logs to the email. It doesn't matter whether the backup is full or incremental, the result is always the same. RHVH hosts work properly.
Log fragment:
2020-12-08 10:18:33,845+01 INFO [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (default task-76) [60da31e3-92f6-4555-8c43-2f8afee272e0] Updating image transfer 87bdb42e-e64c-460d-97ac-218e923336a1 (image e57e4af0-5d0b-4f60-9e6c-e217c666e5e6) phase to Finalizing Success
2020-12-08 10:18:33,940+01 INFO [org.ovirt.engine.core.bll.StopVmBackupCommand] (default task-76) [89ec1a77-4b46-42b0-9d0f-15e53d5f952a] Running command: StopVmBackupCommand internal: false. Entities affected : ID: 35183baa-1c70-4016-b7cd-528889876f19 Type: VMAction group BACKUP_DISK with role type ADMIN, ID: e57e4af0-5d0b-4f60-9e6c-e217c666e5e6 Type: DiskAction group BACKUP_DISK with role type ADMIN
2020-12-08 10:18:33,940+01 INFO [org.ovirt.engine.core.bll.StopVmBackupCommand] (default task-76) [89ec1a77-4b46-42b0-9d0f-15e53d5f952a] Stopping VmBackup 'aae03819-cea6-45a1-9ee5-0f831af8464d'
2020-12-08 10:18:33,952+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.StopVmBackupVDSCommand] (default task-76) [89ec1a77-4b46-42b0-9d0f-15e53d5f952a] START, StopVmBackupVDSCommand(HostName = rhv-2, VmBackupVDSParameters:{hostId='afad6b8b-78a6-4e9a-a9bd-783ad42a2d47', backupId='aae03819-cea6-45a1-9ee5-0f831af8464d'}), log id: 78b2c27a
2020-12-08 10:18:33,958+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.StopVmBackupVDSCommand] (default task-76) [89ec1a77-4b46-42b0-9d0f-15e53d5f952a] FINISH, StopVmBackupVDSCommand, return: , log id: 78b2c27a
2020-12-08 10:18:33,975+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-76) [89ec1a77-4b46-42b0-9d0f-15e53d5f952a] EVENT_ID: VM_BACKUP_FINALIZED(10,794), Backup <UNKNOWN> for VM stor2rrd finalized (User: admin@internal-authz).
2020-12-08 10:18:35,221+01 INFO [org.ovirt.engine.core.sso.servlets.OAuthRevokeServlet] (default task-73) [] User admin@internal successfully logged out
2020-12-08 10:18:35,236+01 INFO [org.ovirt.engine.core.bll.aaa.TerminateSessionsForTokenCommand] (default task-81) [1b35276c] Running command: TerminateSessionsForTokenCommand internal: true.
2020-12-08 10:18:35,236+01 INFO [org.ovirt.engine.core.bll.aaa.SessionDataContainer] (default task-81) [1b35276c] Not removing session '90TxdK0PBueLijy+sCrFoHC/KNUGNzNpZuYMK/yKDAkbAefFr+8wOJsATsDKv18LxpyxCl+eX7hTHNxN23anAw==', session has running commands for user 'admin@internal-authz'.
2020-12-08 10:18:35,447+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-31) [] VM '35183baa-1c70-4016-b7cd-528889876f19' was reported as Down on VDS 'afad6b8b-78a6-4e9a-a9bd-783ad42a2d47'(rhv-2)
2020-12-08 10:18:35,448+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-31) [] START, DestroyVDSCommand(HostName = rhv-2, DestroyVmVDSCommandParameters:{hostId='afad6b8b-78a6-4e9a-a9bd-783ad42a2d47', vmId='35183baa-1c70-4016-b7cd-528889876f19', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 4f473135
2020-12-08 10:18:35,451+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-31) [] FINISH, DestroyVDSCommand, return: , log id: 4f473135
2020-12-08 10:18:35,451+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-31) [] VM '35183baa-1c70-4016-b7cd-528889876f19'(stor2rrd) moved from 'Up' --> 'Down'
2020-12-08 10:18:35,466+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-31) [] EVENT_ID: VM_DOWN_ERROR(119), VM stor2rrd is down with error. Exit message: Lost connection with qemu process.
2020-12-08 10:18:35,484+01 INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand] (EE-ManagedThreadFactory-engine-Thread-34085) [3da40159] Running command: ProcessDownVmCommand internal: true.
The environment of faulty host is:
OS Version: RHEL - 8.3 - 1.0.el8
OS Description: Red Hat Enterprise Linux 8.3 (Ootpa)
Kernel Version: 4.18.0 - 240.1.1.el8_3.x86_64
KVM Version: 5.1.0 - 14.module+el8.3.0+8438+644aff69
LIBVIRT Version: libvirt-6.6.0-7.module+el8.3.0+8424+5ea525c5
VDSM Version: vdsm-4.40.26.3-1.el8ev
SPICE Version: 0.14.3 - 3.el8
GlusterFS Version: [N/A]
CEPH Version: librbd1-12.2.7-9.el8
Open vSwitch Version: openvswitch-2.11-7.el8ev
Nmstate Version: nmstate-0.3.4-13.el8_3
Regards
Łukasz Kołaciński
Junior Java Developer
e-mail: l.kolacinski(a)storware.eu
ul. Leszno 8/44
01-192 Warszawa
www.storware.eu
4 years, 4 months
oVirt and RHEV
by tommy
1. Can oVirt be used to manage RHEV?
2. What is the relation between oVirt and RHEV?
Thanks!
4 years, 4 months
OPNsense / FreeBSD 12.1
by Jorge Visentini
Hi all.
I tried to install OPNsense 20.7.6 (FreeBSD 12.1) and it was not possible
to detect the NICs.
I tried both the virtio driver and the e1000. Virtio is not detected, and
e1000 crashes at startup.
In pure KVM, it works, so I believe there is some incompatibility with
oVirt 4.4.4.
Any tips?
--
Att,
Jorge Visentini
+55 55 98432-9868
4 years, 4 months
Recent news & oVirt future
by Charles Kozler
I guess this is probably a question for all current open source projects
that red hat runs but -
Does this mean oVirt will effectively become a rolling release type
situation as well?
How exactly is oVirt going to stay open source and stay in cadence with all
the other updates happening around it on packages/etc that it depends on if
the streams are rolling release? Do they now need to fork every piece of
dependency?
What exactly does this mean for oVirt going forward and its overall
stability?
4 years, 4 months