
Hello Everyone,

Using version 4.2.8 (ovirt-node-ng-installer-4.2.0-2019012606.el7.iso), deploying a one-node instance by following the wizard from within the Cockpit UI seems not to be possible. Here's the generated inventory (I've specified "jbod" in the wizard):

#gdeploy configuration generated by cockpit-gluster plugin
[hosts]
192.168.80.191

[script1:192.168.80.191]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 192.168.80.191

[disktype]
jbod

[service1]
action=enable
service=chronyd

[service2]
action=restart
service=chronyd

[shell2]
action=execute
command=vdsm-tool configure --force

[script3]
action=execute
file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
ignore_script_errors=no

[pv1:192.168.80.191]
action=create
devices=sdb
ignore_pv_errors=no

[vg1:192.168.80.191]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[lv1:192.168.80.191]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=230GB
lvtype=thick

[selinux]
yes

[service3]
action=restart
service=glusterd
slice_setup=yes

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
services=glusterfs

[script2]
action=execute
file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh

[shell3]
action=execute
command=usermod -a -G gluster qemu

[volume1]
action=create
volname=engine
transport=tcp
key=storage.owner-uid,storage.owner-gid,features.shard,performance.low-prio-threads,performance.strict-o-direct,network.remote-dio,network.ping-timeout,user.cifs,nfs.disable,performance.quick-read,performance.read-ahead,performance.io-cache,cluster.eager-lock
value=36,36,on,32,on,off,30,off,on,off,off,off,enable
brick_dirs=192.168.80.191:/gluster_bricks/engine/engine
ignore_volume_errors=no

It does not get to finish, throwing the following error:

PLAY [gluster_servers] *********************************************************

TASK [Create volume group on the disks] ****************************************
changed: [192.168.80.191] => (item={u'brick': u'/dev/sdb', u'vg': u'gluster_vg_sdb'})

PLAY RECAP *********************************************************************
192.168.80.191 : ok=1 changed=1 unreachable=0 failed=0

Error: Section diskcount not found in the configuration file

Any thoughts?

-- Best regards, Leo David
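A quick way to dig into this failure is to inspect and re-run the wizard-generated config directly from the shell. A minimal sketch follows; the file path is an assumption (the Cockpit wizard may store the generated config elsewhere on your system), so adjust it to wherever yours was written:

# Assumed location of the config the Cockpit wizard generated; adjust if yours differs
CONF=/var/lib/ovirt-hosted-engine-setup/cockpit/gdeployConfig.conf
grep -n '^\[' "$CONF"     # list the sections that were actually written
gdeploy -c "$CONF"        # re-run the same configuration outside Cockpit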

Hi,

It seems I had to manually add the following sections to make the script work:

[diskcount]
12
[stripesize]
256

It looks like ansible is still searching for these sections, regardless of the fact that I have configured "jbod" in the wizard...

Thanks,
Leo
-- Best regards, Leo David
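For reference, the workaround described above amounts to appending the two missing sections to the generated config and re-running it. A sketch, reusing the assumed config path from the earlier example (the values are the ones Leo added):

# Append the sections gdeploy still expects, even with jbod, then re-run
cat >> "$CONF" <<'EOF'
[diskcount]
12
[stripesize]
256
EOF
gdeploy -c "$CONF"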

Hi David,

Can you please check the gdeploy version? This bug was fixed last year:
https://bugzilla.redhat.com/show_bug.cgi?id=1626513
and the fix is part of gdeploy-2.0.2-29.
-- Thanks, Gobinda

Hi Gobinda,

gdeploy --version
gdeploy 2.0.2

yum list installed | grep gdeploy
gdeploy.noarch    2.0.8-1.el7    installed

Thank you!
-- Best regards, Leo David

Hi David,

Thanks! Adding sac to check if we are missing anything for gdeploy.
-- Thanks, Gobinda

Hi David,
Ramakrishna will build a Fedora package that includes the fix. It should be available to you shortly. Will keep you posted.
-sac

The packages in Fedora 28 and Fedora 29 have been updated. You should be able to update to the latest version now. I have raised a PR for the CentOS package update to include the same version as Fedora 28/29.

regards
--
Ramakrishna Reddy Yekulla
rreddy@redhat.com
M +91-9823642625
IRC Nick :: ramkrsna , ramky
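On Fedora 28/29 this should then be a normal package update; a sketch, assuming the rebuilt package has already reached your configured repositories:

# Pull in the rebuilt gdeploy and confirm the installed version
dnf update gdeploy
rpm -q gdeploy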

Hey Guys/Gals,

did you update the gdeploy for CentOS? It seems not to be working - now it doesn't honour the whole cockpit wizard. Instead of JBOD it selects raid6, instead of md0 it uses sdb, etc.

[root@ovirt1 ~]# gdeploy --version
gdeploy 2.0.2
[root@ovirt1 ~]# rpm -qa gdeploy
gdeploy-2.0.8-1.el7.noarch

Note: This is a fresh install.

Best Regards,
Strahil Nikolov

gdeploy is updated for Fedora; for CentOS the packages will be updated shortly, we are testing them. However, this issue you are facing where RAID is selected over JBOD is strange. Gobinda will look into this, and might need more details.

Hi All,

I have managed to fix this by reinstalling the gdeploy package. Yet, it still asks for the "diskcount" section - but as the fix has not been rolled out for CentOS yet, this is expected.

Best Regards,
Strahil Nikolov
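The reinstall Strahil mentions would look roughly like this; a sketch that simply reinstalls the currently available build from the configured repositories and confirms what ends up on disk:

# Reinstall the available gdeploy build and verify it
yum reinstall -y gdeploy
rpm -q gdeploy
gdeploy --version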

Till the CentOS team includes the updated package, you can provide the diskcount section as a workaround. It will not actually be used anyway (no side-effects).

-sac
participants (5)
- Gobinda Das
- Leo David
- Ramakrishna Reddy Yekulla
- Sachidananda URS
- Strahil Nikolov