Thanks! After adding the workaround, I was able to complete the deployment.
On Fri, May 10, 2019 at 1:39 AM Parth Dhanjal <dparth(a)redhat.com> wrote:
Hey!
oVirt 4.3.3 uses gluster-ansible-roles to deploy the storage.
There are multiple checks during a deployment.
The particular check that is failing is part of
gluster-ansible-features (
https://github.com/gluster/gluster-ansible-features/tree/master/roles/glu...
).
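For context, the failing task just runs df on /var/log and fails when the
free space is below a minimum. A minimal sketch of the idea as an Ansible
task (illustrative only, not the role's verbatim source; the actual
threshold is defined in the role, the 15360 MB value here is an assumption):

    # Sketch of the /var/log free-space check.
    # Threshold assumed for illustration: 15 GB = 15360 MB; the role
    # defines the real value.
    - name: Check if /var/log has enough disk space
      shell: df -m /var/log | awk '/[0-9]%/ {print $4}'
      register: var_log_size
      failed_when: var_log_size.stdout | int < 15360

Your host reported 7470 MB free on /var/log (the "stdout" in the error),
which is below that limit, hence the failure.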
A simple workaround is to skip the check: edit the generated inventory
file in the last step before deployment and add
gluster_features_force_varlogsizecheck: false
under the vars section of the file, as in the example below.
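For example, the generated inventory would end up looking something like
this (layout assumed from the HC wizard's YAML inventory; the hostname is
a placeholder, your host entry stays as generated):

    hc_nodes:
      hosts:
        host1.example.com:      # placeholder for your generated host entry
      vars:
        gluster_features_force_varlogsizecheck: false   # skip the /var/log size check

Once that is saved, redeploy and the check will be skipped.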
Regards
Parth Dhanjal
On Fri, May 10, 2019 at 5:58 AM Edward Berger <edwberger(a)gmail.com> wrote:
> I'm trying to bring up a single-node hyperconverged deployment with the
> current node-ng ISO installation, but it ends with this failure message.
>
> TASK [gluster.features/roles/gluster_hci : Check if /var/log has enough
> disk space] ***
> fatal: [br014.bridges.psc.edu]: FAILED! => {"changed": true, "cmd": "df
> -m /var/log | awk '/[0-9]%/ {print $4}'", "delta": "0:00:00.008513", "end":
> "2019-05-09 20:09:27.914400", "failed_when_result": true, "rc": 0, "start":
> "2019-05-09 20:09:27.905887", "stderr": "", "stderr_lines": [], "stdout":
> "7470", "stdout_lines": ["7470"]}
>
> I have what the installer created by default for /var/log, so I don't
> know why it's complaining.
>
> [root@br014 ~]# df -kh
> Filesystem                                                      Size  Used Avail Use% Mounted on
> /dev/mapper/onn_br014-ovirt--node--ng--4.3.3.1--0.20190417.0+1  3.5T  2.1G  3.3T   1% /
> devtmpfs                                                         63G     0   63G   0% /dev
> tmpfs                                                            63G  4.0K   63G   1% /dev/shm
> tmpfs                                                            63G   18M   63G   1% /run
> tmpfs                                                            63G     0   63G   0% /sys/fs/cgroup
> /dev/mapper/onn_br014-home                                      976M  2.6M  907M   1% /home
> /dev/mapper/onn_br014-tmp                                       976M  2.8M  906M   1% /tmp
> /dev/mapper/onn_br014-var                                        15G   42M   14G   1% /var
> /dev/sda2                                                       976M  173M  737M  19% /boot
> /dev/mapper/onn_br014-var_log                                   7.8G   41M  7.3G   1% /var/log
> /dev/mapper/onn_br014-var_log_audit                             2.0G  7.6M  1.8G   1% /var/log/audit
> /dev/mapper/onn_br014-var_crash                                 9.8G   37M  9.2G   1% /var/crash
> /dev/sda1                                                       200M   12M  189M   6% /boot/efi
> tmpfs                                                            13G     0   13G   0% /run/user/1000
> tmpfs                                                            13G     0   13G   0% /run/user/0
> /dev/mapper/gluster_vg_sdb-gluster_lv_engine                    3.7T   33M  3.7T   1% /gluster_bricks/engine
> /dev/mapper/gluster_vg_sdc-gluster_lv_data                      3.7T   34M  3.7T   1% /gluster_bricks/data
> /dev/mapper/gluster_vg_sdd-gluster_lv_vmstore                   3.7T   34M  3.7T   1% /gluster_bricks/vmstore
>
> The machine has four 4TB disks: sda holds the oVirt node-ng
> installation, and the other three disks are used for the gluster volumes.
>
> [root@br014 ~]# pvs
> PV VG Fmt Attr PSize PFree
> /dev/sda3 onn_br014 lvm2 a-- <3.64t 100.00g
> /dev/sdb gluster_vg_sdb lvm2 a-- <3.64t <26.02g
> /dev/sdc gluster_vg_sdc lvm2 a-- <3.64t 0
> /dev/sdd gluster_vg_sdd lvm2 a-- <3.64t 0
>
> [root@br014 ~]# vgs
> VG #PV #LV #SN Attr VSize VFree
> gluster_vg_sdb 1 1 0 wz--n- <3.64t <26.02g
> gluster_vg_sdc 1 2 0 wz--n- <3.64t 0
> gluster_vg_sdd 1 2 0 wz--n- <3.64t 0
> onn_br014 1 11 0 wz--n- <3.64t 100.00g
>
> [root@br014 ~]# lvs
> LV                                   VG             Attr       LSize   Pool                            Origin                             Data%  Meta% Move Log Cpy%Sync Convert
> gluster_lv_engine                    gluster_vg_sdb -wi-ao----   3.61t
> gluster_lv_data                      gluster_vg_sdc Vwi-aot---   3.61t gluster_thinpool_gluster_vg_sdc                                     0.05
> gluster_thinpool_gluster_vg_sdc      gluster_vg_sdc twi-aot---  <3.61t                                                                    0.05   0.13
> gluster_lv_vmstore                   gluster_vg_sdd Vwi-aot---   3.61t gluster_thinpool_gluster_vg_sdd                                     0.05
> gluster_thinpool_gluster_vg_sdd      gluster_vg_sdd twi-aot---  <3.61t                                                                    0.05   0.13
> home                                 onn_br014      Vwi-aotz--   1.00g pool00                                                              4.79
> ovirt-node-ng-4.3.3.1-0.20190417.0   onn_br014      Vwi---tz-k  <3.51t pool00                          root
> ovirt-node-ng-4.3.3.1-0.20190417.0+1 onn_br014      Vwi-aotz--  <3.51t pool00                          ovirt-node-ng-4.3.3.1-0.20190417.0  0.13
> pool00                               onn_br014      twi-aotz--   3.53t                                                                    0.19   1.86
> root                                 onn_br014      Vri---tz-k  <3.51t pool00
> swap                                 onn_br014      -wi-ao----   4.00g
> tmp                                  onn_br014      Vwi-aotz--   1.00g pool00                                                              4.84
> var                                  onn_br014      Vwi-aotz--  15.00g pool00                                                              3.67
> var_crash                            onn_br014      Vwi-aotz--  10.00g pool00                                                              2.86
> var_log                              onn_br014      Vwi-aotz--   8.00g pool00                                                              3.25
> var_log_audit                        onn_br014      Vwi-aotz--   2.00g pool00                                                              4.86
>
>
>
> Here's the full deploy log from the UI. Let me know if you need specific
> logs.
>
>
> PLAY [Setup backend]
> ***********************************************************
>
> TASK [Gathering Facts]
> *********************************************************
> ok: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/firewall_config : Start firewalld if not
> already started] ***
> ok: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/firewall_config : check if required variables
> are set] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/firewall_config : Open/Close firewalld ports]
> ********
> ok: [br014.bridges.psc.edu] => (item=2049/tcp)
> ok: [br014.bridges.psc.edu] => (item=54321/tcp)
> ok: [br014.bridges.psc.edu] => (item=5900/tcp)
> ok: [br014.bridges.psc.edu] => (item=5900-6923/tcp)
> ok: [br014.bridges.psc.edu] => (item=5666/tcp)
> ok: [br014.bridges.psc.edu] => (item=16514/tcp)
>
> TASK [gluster.infra/roles/firewall_config : Add/Delete services to
> firewalld rules] ***
> ok: [br014.bridges.psc.edu] => (item=glusterfs)
>
> TASK [gluster.infra/roles/backend_setup : Gather facts to determine the
> OS distribution] ***
> ok: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for
> debian systems.] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for
> RHEL systems.] ***
> ok: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Install python-yaml package for
> Debian systems] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Initialize vdo_devs array]
> ***********
> ok: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Record VDO devices (if any)]
> *********
> skipping: [br014.bridges.psc.edu] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
> skipping: [br014.bridges.psc.edu] => (item={u'vgname': u'gluster_vg_sdc', u'pvname': u'/dev/sdc'})
> skipping: [br014.bridges.psc.edu] => (item={u'vgname': u'gluster_vg_sdd', u'pvname': u'/dev/sdd'})
>
> TASK [gluster.infra/roles/backend_setup : Enable and start vdo service]
> ********
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Create VDO with specified size]
> ******
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Check if valid disktype is
> provided] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD]
> ******
> ok: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Set PV data alignment for RAID]
> ******
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Set VG physical extent size for
> RAID] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Create volume groups]
> ****************
> ok: [br014.bridges.psc.edu] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
> ok: [br014.bridges.psc.edu] => (item={u'vgname': u'gluster_vg_sdc', u'pvname': u'/dev/sdc'})
> ok: [br014.bridges.psc.edu] => (item={u'vgname': u'gluster_vg_sdd', u'pvname': u'/dev/sdd'})
>
> TASK [gluster.infra/roles/backend_setup : Create thick logical volume]
> *********
> ok: [br014.bridges.psc.edu] => (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'3700G'})
>
> TASK [gluster.infra/roles/backend_setup : Calculate chunksize for
> RAID6/RAID10/RAID5] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Set chunksize for JBOD]
> **************
> ok: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Create a LV thinpool]
> ****************
> ok: [br014.bridges.psc.edu] => (item={u'vgname': u'gluster_vg_sdc', u'thinpoolname': u'gluster_thinpool_gluster_vg_sdc', u'poolmetadatasize': u'16G'})
> ok: [br014.bridges.psc.edu] => (item={u'vgname': u'gluster_vg_sdd', u'thinpoolname': u'gluster_thinpool_gluster_vg_sdd', u'poolmetadatasize': u'16G'})
>
> TASK [gluster.infra/roles/backend_setup : Create thin logical volume]
> **********
> ok: [br014.bridges.psc.edu] => (item={u'lvname': u'gluster_lv_data', u'vgname': u'gluster_vg_sdc', u'thinpool': u'gluster_thinpool_gluster_vg_sdc', u'lvsize': u'3700G'})
> ok: [br014.bridges.psc.edu] => (item={u'lvname': u'gluster_lv_vmstore', u'vgname': u'gluster_vg_sdd', u'thinpool': u'gluster_thinpool_gluster_vg_sdd', u'lvsize': u'3700G'})
>
> TASK [gluster.infra/roles/backend_setup : Extend volume group]
> *****************
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Change attributes of LV]
> *************
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Create LV for cache]
> *****************
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Create metadata LV for cache]
> ********
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Convert logical volume to a
> cache pool LV] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Convert logical volume to a
> cache pool LV without cachemetalvname] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Convert an existing logical
> volume to a cache LV] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Set XFS options for JBOD]
> ************
> ok: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Set XFS options for RAID
> devices] ****
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Create filesystem on thin
> logical vols] ***
> ok: [br014.bridges.psc.edu] => (item={u'lvname': u'gluster_lv_data', u'vgname': u'gluster_vg_sdc', u'thinpool': u'gluster_thinpool_gluster_vg_sdc', u'lvsize': u'3700G'})
> ok: [br014.bridges.psc.edu] => (item={u'lvname': u'gluster_lv_vmstore', u'vgname': u'gluster_vg_sdd', u'thinpool': u'gluster_thinpool_gluster_vg_sdd', u'lvsize': u'3700G'})
>
> TASK [gluster.infra/roles/backend_setup : Create filesystem on thick
> logical vols] ***
> ok: [br014.bridges.psc.edu] => (item={u'lvname': u'gluster_lv_engine', u'vgname': u'gluster_vg_sdb', u'size': u'3700G'})
>
> TASK [gluster.infra/roles/backend_setup : Create mount directories if not
> already present] ***
> ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/engine', u'vgname': u'gluster_vg_sdb', u'lvname': u'gluster_lv_engine'})
> ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/data', u'vgname': u'gluster_vg_sdc', u'lvname': u'gluster_lv_data'})
> ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/vmstore', u'vgname': u'gluster_vg_sdd', u'lvname': u'gluster_lv_vmstore'})
>
> TASK [gluster.infra/roles/backend_setup : Set mount options for VDO]
> ***********
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_setup : Mount the vdo devices (If any)]
> ******
> skipping: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/engine', u'vgname': u'gluster_vg_sdb', u'lvname': u'gluster_lv_engine'})
> skipping: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/data', u'vgname': u'gluster_vg_sdc', u'lvname': u'gluster_lv_data'})
> skipping: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/vmstore', u'vgname': u'gluster_vg_sdd', u'lvname': u'gluster_lv_vmstore'})
>
> TASK [gluster.infra/roles/backend_setup : Mount the devices]
> *******************
> ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/engine', u'vgname': u'gluster_vg_sdb', u'lvname': u'gluster_lv_engine'})
> ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/data', u'vgname': u'gluster_vg_sdc', u'lvname': u'gluster_lv_data'})
> ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/vmstore', u'vgname': u'gluster_vg_sdd', u'lvname': u'gluster_lv_vmstore'})
>
> TASK [gluster.infra/roles/backend_setup : Set Gluster specific SeLinux
> context on the bricks] ***
> ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/engine', u'vgname': u'gluster_vg_sdb', u'lvname': u'gluster_lv_engine'})
> ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/data', u'vgname': u'gluster_vg_sdc', u'lvname': u'gluster_lv_data'})
> ok: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/vmstore', u'vgname': u'gluster_vg_sdd', u'lvname': u'gluster_lv_vmstore'})
>
> TASK [gluster.infra/roles/backend_setup : restore file(s) default SELinux
> security contexts] ***
> changed: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/engine', u'vgname': u'gluster_vg_sdb', u'lvname': u'gluster_lv_engine'})
> changed: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/data', u'vgname': u'gluster_vg_sdc', u'lvname': u'gluster_lv_data'})
> changed: [br014.bridges.psc.edu] => (item={u'path': u'/gluster_bricks/vmstore', u'vgname': u'gluster_vg_sdd', u'lvname': u'gluster_lv_vmstore'})
>
> TASK [gluster.infra/roles/backend_reset : unmount the directories (if
> mounted)] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_reset : Delete volume groups]
> ****************
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.infra/roles/backend_reset : Remove VDO devices]
> ******************
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Create temporary storage
> directory] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Get the name of the directory
> created] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : check if
> gluster_features_ganesha_clusternodes is set] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Define service port]
> ****************
> skipping: [br014.bridges.psc.edu] => (item=^#(STATD_PORT=.*))
> skipping: [br014.bridges.psc.edu] => (item=^#(LOCKD_TCPPORT=.*))
> skipping: [br014.bridges.psc.edu] => (item=^#(LOCKD_UDPPORT=.*))
>
> TASK [gluster.features/roles/nfs_ganesha : Check packages installed, if
> not install] ***
> skipping: [br014.bridges.psc.edu] => (item=glusterfs-ganesha)
> skipping: [br014.bridges.psc.edu] => (item=nfs-ganesha)
> skipping: [br014.bridges.psc.edu] => (item=corosync)
> skipping: [br014.bridges.psc.edu] => (item=pacemaker)
> skipping: [br014.bridges.psc.edu] => (item=libntirpc)
> skipping: [br014.bridges.psc.edu] => (item=pcs)
>
> TASK [gluster.features/roles/nfs_ganesha : Restart services]
> *******************
> skipping: [br014.bridges.psc.edu] => (item=nfslock)
> skipping: [br014.bridges.psc.edu] => (item=nfs-config)
> skipping: [br014.bridges.psc.edu] => (item=rpc-statd)
>
> TASK [gluster.features/roles/nfs_ganesha : Stop services]
> **********************
> skipping: [br014.bridges.psc.edu] => (item=nfs-server)
>
> TASK [gluster.features/roles/nfs_ganesha : Disable service]
> ********************
> skipping: [br014.bridges.psc.edu] => (item=nfs-server)
>
> TASK [gluster.features/roles/nfs_ganesha : Enable services]
> ********************
> skipping: [br014.bridges.psc.edu] => (item=glusterfssharedstorage)
> skipping: [br014.bridges.psc.edu] => (item=nfs-ganesha)
> skipping: [br014.bridges.psc.edu] => (item=network)
> skipping: [br014.bridges.psc.edu] => (item=pcsd)
> skipping: [br014.bridges.psc.edu] => (item=pacemaker)
>
> TASK [gluster.features/roles/nfs_ganesha : Start services]
> *********************
> skipping: [br014.bridges.psc.edu] => (item=network)
> skipping: [br014.bridges.psc.edu] => (item=pcsd)
>
> TASK [gluster.features/roles/nfs_ganesha : Create a user hacluster if not
> already present] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Set the password for
> hacluster] *****
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Set the hacluster user the
> same password on new nodes] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Pcs cluster authenticate the
> hacluster on new nodes] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Pause for a few seconds after
> pcs auth] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Set gluster_use_execmem flag
> on and keep it persistent] ***
> skipping: [br014.bridges.psc.edu] => (item=gluster_use_execmem)
> skipping: [br014.bridges.psc.edu] => (item=ganesha_use_fusefs)
>
> TASK [gluster.features/roles/nfs_ganesha : check if
> gluster_features_ganesha_masternode is set] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Copy the ssh keys to the local
> machine] ***
> skipping: [br014.bridges.psc.edu] => (item=secret.pem.pub)
> skipping: [br014.bridges.psc.edu] => (item=secret.pem)
>
> TASK [gluster.features/roles/nfs_ganesha : check if
> gluster_features_ganesha_newnodes_vip is set] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Copy the public key to remote
> nodes] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Copy the private key to remote
> node] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Deploy the pubkey on all
> nodes] *****
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Make the volume a gluster
> shared volume] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Generate ssh key in one of the
> nodes in HA cluster] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Copy the ssh keys to the local
> machine] ***
> skipping: [br014.bridges.psc.edu] => (item=secret.pem.pub)
> skipping: [br014.bridges.psc.edu] => (item=secret.pem)
>
> TASK [gluster.features/roles/nfs_ganesha : Create configuration directory
> for nfs_ganesha] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Copy ganesha.conf to config
> directory on shared volume] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Create ganesha-ha.conf file]
> ********
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Enable NFS Ganesha]
> *****************
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Pause for 30 seconds (takes a
> while to enable NFS Ganesha)] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Check NFS Ganesha status]
> ***********
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Report NFS Ganesha status]
> **********
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Report NFS Ganesha status (If
> any errors)] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : check if
> gluster_features_ganesha_volume is set] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Export the NFS Ganesha volume]
> ******
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Copy the public key to remote
> nodes] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Copy the private key to remote
> node] ***
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Deploy the pubkey on all
> nodes] *****
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Adds a node to the cluster]
> *********
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Report ganesha add-node
> status] *****
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/nfs_ganesha : Delete the temporary
> directory] *****
> skipping: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/gluster_hci : Check if packages are
> installed, if not install] ***
> ok: [br014.bridges.psc.edu] => (item=vdsm)
> ok: [br014.bridges.psc.edu] => (item=vdsm-gluster)
> ok: [br014.bridges.psc.edu] => (item=ovirt-host)
> ok: [br014.bridges.psc.edu] => (item=screen)
>
> TASK [gluster.features/roles/gluster_hci : Enable and start glusterd and
> chronyd] ***
> ok: [br014.bridges.psc.edu] => (item=chronyd)
> ok: [br014.bridges.psc.edu] => (item=glusterd)
> ok: [br014.bridges.psc.edu] => (item=firewalld)
>
> TASK [gluster.features/roles/gluster_hci : Add user qemu to gluster
> group] *****
> ok: [br014.bridges.psc.edu]
>
> TASK [gluster.features/roles/gluster_hci : Disable the hook scripts]
> ***********
> changed: [br014.bridges.psc.edu] => (item=/var/lib/glusterd/hooks/1/set/post/S30samba-set.sh)
> changed: [br014.bridges.psc.edu] => (item=/var/lib/glusterd/hooks/1/start/post/S30samba-start.sh)
> changed: [br014.bridges.psc.edu] => (item=/var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh)
> changed: [br014.bridges.psc.edu] => (item=/var/lib/glusterd/hooks/1/reset/post/S31ganesha-reset.sh)
> changed: [br014.bridges.psc.edu] => (item=/var/lib/glusterd/hooks/1//start/post/S31ganesha-start.sh)
> changed: [br014.bridges.psc.edu] => (item=/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh)
> changed: [br014.bridges.psc.edu] => (item=/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh)
>
> TASK [gluster.features/roles/gluster_hci : Check if valid FQDN is
> provided] ****
> changed: [br014.bridges.psc.edu -> localhost] => (item=br014.bridges.psc.edu)
>
> TASK [gluster.features/roles/gluster_hci : Check if /var/log has enough
> disk space] ***
> fatal: [br014.bridges.psc.edu]: FAILED! => {"changed": true, "cmd": "df
> -m /var/log | awk '/[0-9]%/ {print $4}'", "delta": "0:00:00.008513", "end":
> "2019-05-09 20:09:27.914400", "failed_when_result": true, "rc": 0, "start":
> "2019-05-09 20:09:27.905887", "stderr": "", "stderr_lines": [], "stdout":
> "7470", "stdout_lines": ["7470"]}
>
> NO MORE HOSTS LEFT
> *************************************************************
>
> NO MORE HOSTS LEFT
> *************************************************************
> to retry, use: --limit @/usr/share/cockpit/ovirt-dashboard/ansible/hc_wizard.retry
>
> PLAY RECAP
> *********************************************************************
>
> br014.bridges.psc.edu : ok=25 changed=3 unreachable=0 failed=1