While doing a hyperconverged setup and using "Configure LV Cache" with
/dev/sdf, the deployment fails. If I don't use the LV cache SSD disk, the
setup succeeds. Thought you might want to know; for now I retested with
4.3.3 and everything worked fine, so I am reverting to 4.3.3 unless you
know of a workaround?
Error:

TASK [gluster.infra/roles/backend_setup : Extend volume group] *****************
failed: [vmm11.mydomain.com] (item={u'vgname': u'gluster_vg_sdb', u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname': u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf', u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode': u'writethrough', u'cachemetalvsize': u'0.1G', u'cachelvsize': u'0.9G'}) => {"ansible_loop_var": "item", "changed": false, "err": " Physical volume \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf", "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize": "0.9G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb", "cachemetalvsize": "0.1G", "cachemode": "writethrough", "cachethinpoolname": "gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg": "Unable to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}

The variable file does not seem to be right. You have mentioned
cachethinpoolname: gluster_thinpool_gluster_vg_sdb, but you are not
creating it anywhere, so the Ansible module ends up trying to shrink the
volume group. Also, why are cachelvsize 0.9G and cachemetalvsize 0.1G?
Isn't that too small? Please refer:
failed: [vmm12.mydomain.com] (item={u'vgname': u'gluster_vg_sdb', u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname': u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf', u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode': u'writethrough', u'cachemetalvsize': u'0.1G', u'cachelvsize': u'0.9G'}) => {"ansible_loop_var": "item", "changed": false, "err": " Physical volume \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf", "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize": "0.9G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb", "cachemetalvsize": "0.1G", "cachemode": "writethrough", "cachethinpoolname": "gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg": "Unable to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}
failed: [vmm10.mydomain.com] (item={u'vgname': u'gluster_vg_sdb', u'cachethinpoolname': u'gluster_thinpool_gluster_vg_sdb', u'cachelvname': u'cachelv_gluster_thinpool_gluster_vg_sdb', u'cachedisk': u'/dev/sdf', u'cachemetalvname': u'cache_gluster_thinpool_gluster_vg_sdb', u'cachemode': u'writethrough', u'cachemetalvsize': u'30G', u'cachelvsize': u'270G'}) => {"ansible_loop_var": "item", "changed": false, "err": " Physical volume \"/dev/sdb\" still in use\n", "item": {"cachedisk": "/dev/sdf", "cachelvname": "cachelv_gluster_thinpool_gluster_vg_sdb", "cachelvsize": "270G", "cachemetalvname": "cache_gluster_thinpool_gluster_vg_sdb", "cachemetalvsize": "30G", "cachemode": "writethrough", "cachethinpoolname": "gluster_thinpool_gluster_vg_sdb", "vgname": "gluster_vg_sdb"}, "msg": "Unable to reduce gluster_vg_sdb by /dev/sdb.", "rc": 5}
PLAY RECAP *********************************************************************
vmm10.mydomain.com : ok=13 changed=4 unreachable=0 failed=1 skipped=10 rescued=0 ignored=0
vmm11.mydomain.com : ok=13 changed=4 unreachable=0 failed=1 skipped=10 rescued=0 ignored=0
vmm12.mydomain.com : ok=13 changed=4 unreachable=0 failed=1 skipped=10 rescued=0 ignored=0
---------------------------------------------------------------------------------------------------------------------
#cat /etc/ansible/hc_wizard_inventory.yml
---------------------------------------------------------------------------------------------------------------------
hc_nodes:
  hosts:
    vmm10.mydomain.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
        - vgname: gluster_vg_sdc
          pvname: /dev/sdc
        - vgname: gluster_vg_sdd
          pvname: /dev/sdd
        - vgname: gluster_vg_sde
          pvname: /dev/sde
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/vmstore1
          lvname: gluster_lv_vmstore1
          vgname: gluster_vg_sdc
        - path: /gluster_bricks/data1
          lvname: gluster_lv_data1
          vgname: gluster_vg_sdd
        - path: /gluster_bricks/data2
          lvname: gluster_lv_data2
          vgname: gluster_vg_sde
      gluster_infra_cache_vars:
        - vgname: gluster_vg_sdb
          cachedisk: /dev/sdf
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
          cachethinpoolname: gluster_thinpool_gluster_vg_sdb
          cachelvsize: 270G
          cachemetalvsize: 30G
          cachemetalvname: cache_gluster_thinpool_gluster_vg_sdb
          cachemode: writethrough
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 100G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdc
          thinpoolname: gluster_thinpool_gluster_vg_sdc
          poolmetadatasize: 14G
        - vgname: gluster_vg_sdd
          thinpoolname: gluster_thinpool_gluster_vg_sdd
          poolmetadatasize: 14G
        - vgname: gluster_vg_sde
          thinpoolname: gluster_thinpool_gluster_vg_sde
          poolmetadatasize: 14G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_gluster_vg_sdc
          lvname: gluster_lv_vmstore1
          lvsize: 2700G
        - vgname: gluster_vg_sdd
          thinpool: gluster_thinpool_gluster_vg_sdd
          lvname: gluster_lv_data1
          lvsize: 2700G
        - vgname: gluster_vg_sde
          thinpool: gluster_thinpool_gluster_vg_sde
          lvname: gluster_lv_data2
          lvsize: 2700G
    vmm11.mydomain.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
        - vgname: gluster_vg_sdc
          pvname: /dev/sdc
        - vgname: gluster_vg_sdd
          pvname: /dev/sdd
        - vgname: gluster_vg_sde
          pvname: /dev/sde
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/vmstore1
          lvname: gluster_lv_vmstore1
          vgname: gluster_vg_sdc
        - path: /gluster_bricks/data1
          lvname: gluster_lv_data1
          vgname: gluster_vg_sdd
        - path: /gluster_bricks/data2
          lvname: gluster_lv_data2
          vgname: gluster_vg_sde
      gluster_infra_cache_vars:
        - vgname: gluster_vg_sdb
          cachedisk: /dev/sdf
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
          cachethinpoolname: gluster_thinpool_gluster_vg_sdb
          cachelvsize: 0.9G
          cachemetalvsize: 0.1G
          cachemetalvname: cache_gluster_thinpool_gluster_vg_sdb
          cachemode: writethrough
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 100G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdc
          thinpoolname: gluster_thinpool_gluster_vg_sdc
          poolmetadatasize: 14G
        - vgname: gluster_vg_sdd
          thinpoolname: gluster_thinpool_gluster_vg_sdd
          poolmetadatasize: 14G
        - vgname: gluster_vg_sde
          thinpoolname: gluster_thinpool_gluster_vg_sde
          poolmetadatasize: 14G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_gluster_vg_sdc
          lvname: gluster_lv_vmstore1
          lvsize: 2700G
        - vgname: gluster_vg_sdd
          thinpool: gluster_thinpool_gluster_vg_sdd
          lvname: gluster_lv_data1
          lvsize: 2700G
        - vgname: gluster_vg_sde
          thinpool: gluster_thinpool_gluster_vg_sde
          lvname: gluster_lv_data2
          lvsize: 2700G
    vmm12.mydomain.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
        - vgname: gluster_vg_sdc
          pvname: /dev/sdc
        - vgname: gluster_vg_sdd
          pvname: /dev/sdd
        - vgname: gluster_vg_sde
          pvname: /dev/sde
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sdb
        - path: /gluster_bricks/vmstore1
          lvname: gluster_lv_vmstore1
          vgname: gluster_vg_sdc
        - path: /gluster_bricks/data1
          lvname: gluster_lv_data1
          vgname: gluster_vg_sdd
        - path: /gluster_bricks/data2
          lvname: gluster_lv_data2
          vgname: gluster_vg_sde
      gluster_infra_cache_vars:
        - vgname: gluster_vg_sdb
          cachedisk: /dev/sdf
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
          cachethinpoolname: gluster_thinpool_gluster_vg_sdb
          cachelvsize: 0.9G
          cachemetalvsize: 0.1G
          cachemetalvname: cache_gluster_thinpool_gluster_vg_sdb
          cachemode: writethrough
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sdb
          lvname: gluster_lv_engine
          size: 100G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sdc
          thinpoolname: gluster_thinpool_gluster_vg_sdc
          poolmetadatasize: 14G
        - vgname: gluster_vg_sdd
          thinpoolname: gluster_thinpool_gluster_vg_sdd
          poolmetadatasize: 14G
        - vgname: gluster_vg_sde
          thinpoolname: gluster_thinpool_gluster_vg_sde
          poolmetadatasize: 14G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sdc
          thinpool: gluster_thinpool_gluster_vg_sdc
          lvname: gluster_lv_vmstore1
          lvsize: 2700G
        - vgname: gluster_vg_sdd
          thinpool: gluster_thinpool_gluster_vg_sdd
          lvname: gluster_lv_data1
          lvsize: 2700G
        - vgname: gluster_vg_sde
          thinpool: gluster_thinpool_gluster_vg_sde
          lvname: gluster_lv_data2
          lvsize: 2700G
  vars:
    gluster_infra_disktype: JBOD
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900/tcp
      - 5900-6923/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs
    gluster_features_force_varlogsizecheck: false
    cluster_nodes:
      - vmm10.mydomain.com
      - vmm11.mydomain.com
      - vmm12.mydomain.com
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
      - volname: engine
        brick: /gluster_bricks/engine/engine
        arbiter: 0
      - volname: vmstore1
        brick: /gluster_bricks/vmstore1/vmstore1
        arbiter: 0
      - volname: data1
        brick: /gluster_bricks/data1/data1
        arbiter: 0
      - volname: data2
        brick: /gluster_bricks/data2/data2
        arbiter: false
---------------------------------------------------------------------------------------------------------------------
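If it is the same problem on your side, a workaround that may be worth trying (an assumption on my part, not verified against the 4.3.4 role, and the accepted cachedisk syntax may differ between role versions) is to list the existing data disk together with the SSD in cachedisk, so the volume group gets extended instead of reduced:

```yaml
gluster_infra_cache_vars:
  - vgname: gluster_vg_sdb
    cachedisk: '/dev/sdb,/dev/sdf'   # existing PV plus the cache SSD (assumed syntax)
    cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
    cachethinpoolname: gluster_thinpool_gluster_vg_sdb
    cachelvsize: 270G
    cachemetalvsize: 30G
    cachemetalvname: cache_gluster_thinpool_gluster_vg_sdb
    cachemode: writethrough
```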
Thanks
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement:
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YYBK7FRRZJM...